diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 0000000000000000000000000000000000000000..3f10b73d8b3f4423736cb8786fe0adfb2cbabb44 --- /dev/null +++ b/.gitattributes @@ -0,0 +1,4 @@ +*.safetensors filter=lfs diff=lfs merge=lfs -text +*.ckpt filter=lfs diff=lfs merge=lfs -text +*.pt filter=lfs diff=lfs merge=lfs -text +*.pth filter=lfs diff=lfs merge=lfs -text diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..3b201a4c85ebf8e30912ba986964b45120127b5d --- /dev/null +++ b/.gitignore @@ -0,0 +1,3 @@ +__pycache__ +temp/ +*.log \ No newline at end of file diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/LICENSE b/custom_nodes/ComfyUI-AnimateDiff-Evolved/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..1669c289b17a88bbfa1c927c31379de8320572a3 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2023 Jedrzej Kosinski + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/README.md b/custom_nodes/ComfyUI-AnimateDiff-Evolved/README.md new file mode 100644 index 0000000000000000000000000000000000000000..d1b15ceccad0933de62600b815f66f64426b45dd --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/README.md @@ -0,0 +1,495 @@ +# AnimateDiff for ComfyUI + +Improved [AnimateDiff](https://github.com/guoyww/AnimateDiff/) integration for ComfyUI, as well as advanced sampling options dubbed Evolved Sampling usable outside of AnimateDiff. Please read the AnimateDiff repo README and Wiki for more information about how it works at its core. + +AnimateDiff workflows will often make use of these helpful node packs: +- [ComfyUI_FizzNodes](https://github.com/FizzleDorf/ComfyUI_FizzNodes) for prompt-travel functionality with the BatchPromptSchedule node. Maintained by FizzleDorf. +- [ComfyUI-Advanced-ControlNet](https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet) for making ControlNets work with Context Options and controlling which latents should be affected by the ControlNet inputs. Includes SparseCtrl support. Maintained by me. +- [ComfyUI-VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite) for loading videos, combining images into videos, and doing various image/latent operations like appending, splitting, duplicating, selecting, or counting. Actively maintained by AustinMroz and I. +- [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux) for ControlNet preprocessors not present in vanilla ComfyUI. Maintained by Fannovel16. +- [ComfyUI_IPAdapter_plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) for IPAdapter support. Maintained by cubiq (matt3o). + +# Installation + +## If using ComfyUI Manager: + +1. Look for ```AnimateDiff Evolved```, and be sure the author is ```Kosinkadink```. Install it. +![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/2c7f29e1-d024-49e1-9eb0-d38070142584) + + +## If installing manually: +1. Clone this repo into `custom_nodes` folder. + +# Model Setup: +1. Download motion modules. You will need at least 1. Different modules produce different results. 
+ - Original models ```mm_sd_v14```, ```mm_sd_v15```, ```mm_sd_v15_v2```, ```v3_sd15_mm```: [HuggingFace](https://huggingface.co/guoyww/animatediff/tree/cd71ae134a27ec6008b968d6419952b0c0494cf2) | [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI) | [CivitAI](https://civitai.com/models/108836) + - Stabilized finetunes of mm_sd_v14, ```mm-Stabilized_mid``` and ```mm-Stabilized_high```, by **manshoety**: [HuggingFace](https://huggingface.co/manshoety/AD_Stabilized_Motion/tree/main) + - Finetunes of mm_sd_v15_v2, ```mm-p_0.5.pth``` and ```mm-p_0.75.pth```, by **manshoety**: [HuggingFace](https://huggingface.co/manshoety/beta_testing_models/tree/main) + - Higher resolution finetune,```temporaldiff-v1-animatediff``` by **CiaraRowles**: [HuggingFace](https://huggingface.co/CiaraRowles/TemporalDiff/tree/main) + - FP16/safetensor versions of vanilla motion models, hosted by **continue-revolution** (takes up less storage space, but uses up the same amount of VRAM as ComfyUI loads models in fp16 by default): [HuffingFace](https://huggingface.co/conrevo/AnimateDiff-A1111/tree/main) +2. Place models in one of these locations (you can rename models if you wish): + - ```ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models``` + - ```ComfyUI/models/animatediff_models``` +3. Optionally, you can use Motion LoRAs to influence movement of v2-based motion models like mm_sd_v15_v2. + - [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) | [HuggingFace](https://huggingface.co/guoyww/animatediff) | [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules) + - Place Motion LoRAs in one of these locations (you can rename Motion LoRAs if you wish): + - ```ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora``` + - ```ComfyUI/models/animatediff_motion_lora``` +4. Get creative! If it works for normal image generation, it (probably) will work for AnimateDiff generations. Latent upscales? Go for it. ControlNets, one or more stacked? You betcha. Masking the conditioning of ControlNets to only affect part of the animation? Sure. Try stuff and you will be surprised by what you can do. Samples with workflows are included below. + +NOTE: you can also use custom locations for models/motion loras by making use of the ComfyUI ```extra_model_paths.yaml``` file. The id for motion model folder is ```animatediff_models``` and the id for motion lora folder is ```animatediff_motion_lora```. + + +# Features +- Compatible with almost any vanilla or custom KSampler node. +- ControlNet, SparseCtrl, and IPAdapter support +- Infinite animation length support via sliding context windows across whole unet (Context Options) and/or within motion module (View Options) +- Scheduling Context Options to change across different points in the sampling process +- FreeInit and FreeNoise support (FreeInit is under iteration opts, FreeNoise is in SampleSettings' noise_type dropdown) +- Mixable Motion LoRAs from [original AnimateDiff repository](https://github.com/guoyww/animatediff/) implemented. Caveat: the original loras really only work on v2-based motion models like ```mm_sd_v15_v2```, ```mm-p_0.5.pth```, and ```mm-p_0.75.pth```. + - UPDATE: New motion LoRAs without the v2 limitation can now be trained via the [AnimateDiff-MotionDirector repo](https://github.com/ExponentialML/AnimateDiff-MotionDirector). Shoutout to ExponentialML for implementing MotionDirector for AnimateDiff purposes! 
+- Prompt travel using BatchPromptSchedule node from [ComfyUI_FizzNodes](https://github.com/FizzleDorf/ComfyUI_FizzNodes) +- Scale and Effect multival inputs to control motion amount and motion model influence on generation. + - Can be float, list of floats, or masks +- Custom noise scheduling via Noise Types, Noise Layers, and seed_override/seed_offset/batch_offset in Sample Settings and related nodes +- AnimateDiff model v1/v2/v3 support +- Using multiple motion models at once via Gen2 nodes (each supporting +- [HotshotXL](https://huggingface.co/hotshotco/Hotshot-XL/tree/main) support (an SDXL motion module arch), ```hsxl_temporal_layers.safetensors```. + - NOTE: You will need to use ```autoselect``` or ```linear (HotshotXL/default)``` beta_schedule, the sweetspot for context_length or total frames (when not using context) is 8 frames, and you will need to use an SDXL checkpoint. +- AnimateDiff-SDXL support, with corresponding model. Currently, a beta version is out, which you can find info about at [AnimateDiff](https://github.com/guoyww/AnimateDiff/). + - NOTE: You will need to use ```autoselect``` or ```linear (AnimateDiff-SDXL)``` beta_schedule. Other than that, same rules of thumb apply to AnimateDiff-SDXL as AnimateDiff. +- [AnimateLCM](https://github.com/G-U-N/AnimateLCM) support + - NOTE: You will need to use ```autoselect``` or ```lcm``` or ```lcm[100_ots]``` beta_schedule. To use fully with LCM, be sure to use appropriate LCM lora, use the ```lcm``` sampler_name in KSampler nodes, and lower cfg to somewhere around 1.0 to 2.0. Don't forget to decrease steps (minimum = ~4 steps), since LCM converges faster (less steps). Increase step count to increase detail as desired. +- AnimateDiff Keyframes to change Scale and Effect at different points in the sampling process. +- fp8 support; requires newest ComfyUI and torch >= 2.1 (decreases VRAM usage, but changes outputs) +- Mac M1/M2/M3 support +- Usage of Context Options and Sample Settings outside of AnimateDiff via Gen2 Use Evolved Sampling node + +## Upcoming Features +- Maskable Motion LoRA +- Maskable SD LoRA (and perhaps maskable SD Models as well) +- [PIA](https://github.com/open-mmlab/PIA) support +- Anything else AnimateDiff-related that comes out + + +# Basic Usage And Nodes + +There are two families of nodes that can be used to use AnimateDiff/Evolved Sampling - **Gen1** and **Gen2**. Other than nodes marked specifically for Gen1/Gen2, all other nodes can be used for both Gen1 and Gen2. + +Gen1 and Gen2 produce the exact same results (the backend code is identical), the only difference is in how the modes are used. Overall, Gen1 is the simplest way to use basic AnimateDiff features, while Gen2 separates model loading and application from the Evolved Sampling features. This means in practice, Gen2's Use Evolved Sampling node can be used without a model model, letting Context Options and Sample Settings be used without AnimateDiff. + +In the following documentation, inputs/outputs will be color coded as follows: +- 🟩 - required inputs +- 🟨 - optional inputs +- 🟦 - start as widgets, can be converted to inputs +- 🟪 - output + +## Gen1/Gen2 Nodes + +| ① Gen1 ① | ② Gen2 ② | +|---|---| +| - All-in-One node
- If same model is loaded by multiple Gen1 nodes, duplicates RAM usage. | - Separates model loading from application and Evolved Sampling
- Enables no motion model usage while preserving Evolved Sampling features
- Enables multiple motion model usage with Apply AnimateDiff Model (Adv.) Node| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/a94029fd-5e74-467b-853c-c3ec4cf8a321)| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/8c050151-6cfb-4350-932d-a105af78a1ec)| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/c7ae9ef3-b5cd-4800-b249-da2cb73c4c1e)| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/cffa21f7-0e33-45d1-9950-ad22eb229134) | + + +### Inputs +- 🟩*model*: StableDiffusion (SD) Model input. +- 🟦*model_name*: AnimateDiff (AD) model to load and/or apply during the sampling process. Certain motion models work with SD1.5, while others work with SDXL. +- 🟦*beta_schedule*: Applies selected beta_schedule to SD model; ```autoselect``` will automatically select the recommended beta_schedule for selected motion models - or will use_existing if no motion model selected for Gen2. +- 🟨*context_options*: Context Options node from the context_opts submenu - should be used when needing to go back the sweetspot of an AnimateDiff model. Works with no motion models as well (Gen2 only). +- 🟨*sample_settings*: Sample Settings node input - used to apply custom sampling options such as FreeNoise (noise_type), FreeInit (iter_opts), custom seeds, Noise Layers, etc. Works with no motion models as well (Gen2 only). +- 🟨*motion_lora*: For v2-based models, Motion LoRA will influence the generated movement. Only a few official motion LoRAs were released - soon, I will be working with some community members to create training code to create (and test) new Motion LoRAs that might work with non-v2 models. +- 🟨*ad_settings*: Modifies motion models during loading process, allowing the Positional Encoders (PEs) to be adjusted to extend a model's sweetspot or modify overall motion. +- 🟨*ad_keyframes*: Allows scheduling of ```scale_multival``` and ```effect_multival``` inputs across sampling timesteps. +- 🟨*scale_multival*: Uses a ```Multival``` input (defaults to ```1.0```). Previously called motion_scale, it directly influences the amount of motion generated by the model. With the Multival nodes, it can accept a float, list of floats, and/or mask inputs, allowing different scale to be applied to not only different frames, but different areas of frames (including per-frame). +- 🟨*effect_multival*: Uses a ```Multival``` input (defaults to ```1.0```). Determines the influence of the motion models on the sampling process. Value of ```0.0``` is equivalent to normal SD output with no AnimateDiff influence. With the Multival nodes, it can accept a float, list of floats, and/or mask inputs, allowing different effect amount to be applied to not only different frames, but different areas of frames (including per-frame). + +#### Gen2-Only Inputs +- 🟨*motion_model*: Input for loaded motion_model. +- 🟨*m_models*: One (or more) motion models outputted from Apply AnimateDiff Model nodes. + +#### Gen2 Adv.-Only Inputs +- 🟨*prev_m_models*: Previous applied motion models to use alongside this one. +- 🟨*start_percent*: Determines when connected motion_model should take effect (supercedes any ad_keyframes). +- 🟨*end_percent*: Determines when connected motion_model should stop taking effect (supercedes any ad_keyframes). + +#### Gen1 (Legacy) Inputs +- 🟦*motion_scale*: legacy version of ```scale_multival```, can only be a float. 
+- 🟦*apply_v2_models_properly*: backwards compatible toggle for months-old workflows that used code that did not turn off groupnorm hack for v2 models. **Only affects v2 models, nothing else.** All nodes default this value to ```True``` now. + +### Outputs +- 🟪*MODEL*: Injected SD model with Evolved Sampling/AnimateDiff. + +#### Gen2-Only Outputs +- 🟪*MOTION_MODEL*: Loaded motion model. +- 🟪*M_MODELS*: One (or more) applied motion models, to be either plugged into Use Evolved Sampling or another Apply AnimateDiff Model (Adv.) node. + + +## Multival Nodes + +For Multival inputs, these nodes allow the use of floats, list of floats, and/or masks to use as input. Scaled Mask node allows customization of dark/light areas of masks in terms of what the values correspond to. + +| Node | Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/d4c6a63f-703a-402b-989e-ab4d04141c7a) | 🟨*mask_optional*: Mask for float values - black means 0.0, white means 1.0 (multiplied by float_val).
🟦*float_val*: Float multiplier.| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/bc100bec-0407-47c8-aebd-f74f2417711e) | 🟩*mask*: Mask for float values.
🟦*min_float_val*: Minimum value.
🟦*max_float_val*: Maximum value.
🟦*scaling*: When ```absolute```, black means min_float_val, white means max_float_val. When ```relative```, darkest area in masks (total) means min_float_val, lighest area in massk (total) means max_float_val. | + + +## AnimateDiff Keyframe + +Allows scheduling (in terms of timesteps) for scale_multival and effect_multival. + +The two settings to determine schedule are ***start_percent*** and ***guarantee_steps***. When multiple keyframes have the same start_percent, they will be executed in the order they are connected, and run for guarantee_steps before moving on to the next node. + +| Node | +|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/dca73cdc-157a-47db-bed2-6ba584dceccd) | + +### Inputs +- 🟨*prev_ad_keyframes*: Chained keyframes to create schedule. +- 🟨*scale_multival*: Value of scale to use for this keyframe. +- 🟨*effect_multival*: Value of effect to use for this keyframe. +- 🟦*start_percent*: Percent of timesteps to start usage of this keyframe. If multiple keyframes have same start_percent, order of execution is determined by their chained order, and will last for guarantee_steps timesteps. +- 🟦*guarantee_steps*: Minimum amount of steps the keyframe will be used - when set to 0, this keyframe will only be used when no other keyframes are better matches for current timestep. +- 🟦*inherit_missing*: When set to ```True```, any missing scale_multival or effect_multival inputs will inherit the previous keyframe's values - if the previous keyframe also inherits missing, the last inherited value will be used. + + +## Context Options and View Options + +These nodes provide techniques used to extend the lengths of animations to get around the sweetspot limitations of AnimateDiff models (typically 16 frames) and HotshotXL model (8 frames). + +Context Options works by diffusing portions of the animation at a time, including main SD diffusion, ControlNets, IPAdapters, etc., effectively limiting VRAM usage to be equivalent to be context_length latents. + +View Options, in contrast, work by portioning the latents seen by the motion model. This does NOT decrease VRAM usage, but in general is more stable and faster than Context Options, since the latents don't have to go through the whole SD unet. + +Context Options and View Options can be combined to get the best of both worlds - longer context_length can be used to gain more stable output, at the cost of using more VRAM (since context_length determines how much SD sampling is done at the same time on the GPU). Provided you have the VRAM, you could also use Views Only Context Options to use only View Options (and automatically make context_length equivalent to full latents) to get a speed boost in return for the higher VRAM usage. + +There are two types of Context/View Options: ***Standard*** and ***Looped***. ***Standard*** options do not cause looping in the output. ***Looped*** options, as the name implies, causes looping in the output (from end to beginning). Prior to the code rework, the only context available was the looping kind. + +***I recommend using Standard Static at first when not wanting looped outputs.*** + +In the below animations, ***green*** shows the Contexts, and ***red*** shows the Views. TL;DR green is the amount of latents that are loaded into VRAM (and sampled), while red is the amount of latents that get passed into the motion model at a time. 
+ +### Context Options◆Standard Static +| Behavior | +|---| +| ![anim__00005](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/b26792d6-0f41-4f07-93aa-e5ee83f4d90e)
(latent count: 64, context_length: 16, context_overlap: 4, total steps: 20)| + +| Node | Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/a4a5f38e-3a1b-4328-9537-ad17567aed75) | 🟦*context_length*: Amount of latents to diffuse at once.
🟦*context_overlap*: Minimum common latents between adjacent windows.
🟦*fuse_method*: Method for averaging results of windows.
🟦*use_on_equal_length*: When True, allows context to be used when latent count matches context_length.
🟦*start_percent*: When multiple Context Options are chained, allows scheduling.
🟦*guarantee_steps*: When scheduling contexts, determines the *minimum* amount of sampling steps context should be used.
🟦*context_length*: Amount of latents to diffuse at once.
🟨*prev_context*: Allows chaining of contexts.
🟨*view_options*: When context_length > view_length (unless otherwise specified), allows view_options to be used within each context window.| + +### Context Options◆Standard Uniform +| Behavior | +|---| +| ![anim__00006](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/69707e3d-f49e-4368-89d5-616af2631594)
(latent count: 64, context_length: 16, context_overlap: 4, context_stride: 1, total steps: 20) | +| ![anim__00010](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/7fc083b4-406f-4809-94ca-b389784adcab)
(latent count: 64, context_length: 16, context_overlap: 4, context_stride: 2, total steps: 20) | + +| Node | Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/c2c8c7ea-66b6-408d-be46-1d805ecd64d1) | 🟦*context_length*: Amount of latents to diffuse at once.
🟦*context_overlap*: Minimum common latents between adjacent windows.
🟦*context_stride*: Maximum 2^(stride-1) distance between adjacent latents.
🟦*fuse_method*: Method for averaging results of windows.
🟦*use_on_equal_length*: When True, allows context to be used when latent count matches context_length.
🟦*start_percent*: When multiple Context Options are chained, allows scheduling.
🟦*guarantee_steps*: When scheduling contexts, determines the *minimum* amount of sampling steps context should be used.
🟦*context_length*: Amount of latents to diffuse at once.
🟨*prev_context*: Allows chaining of contexts.
🟨*view_options*: When context_length > view_length (unless otherwise specified), allows view_options to be used within each context window.| + +### Context Options◆Looped Uniform +| Behavior | +|---| +| ![anim__00008](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/d08ac1c9-2cec-4c9e-b257-0a804448d41b)
(latent count: 64, context_length: 16, context_overlap: 4, context_stride: 1, closed_loop: False, total steps: 20) | +| ![anim__00009](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/61e0311b-b623-423f-bbcb-eb4eb02e9002)
(latent count: 64, context_length: 16, context_overlap: 4, context_stride: 1, closed_loop: True, total steps: 20) | + +| Node | Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/c2c8c7ea-66b6-408d-be46-1d805ecd64d1) | 🟦*context_length*: Amount of latents to diffuse at once.
🟦*context_overlap*: Minimum common latents between adjacent windows.
🟦*context_stride*: Maximum 2^(stride-1) distance between adjacent latents.
🟦*closed_loop*: When True, adds additional windows to enhance looping.
🟦*fuse_method*: Method for averaging results of windows.
🟦*use_on_equal_length*: When True, allows context to be used when latent count matches context_length - allows loops to be made when latent count == context_length.
🟦*start_percent*: When multiple Context Options are chained, allows scheduling.
🟦*guarantee_steps*: When scheduling contexts, determines the *minimum* amount of sampling steps context should be used.
🟦*context_length*: Amount of latents to diffuse at once.
🟨*prev_context*: Allows chaining of contexts.
🟨*view_options*: When context_length > view_length (unless otherwise specified), allows view_options to be used within each context window.| + +### Context Options◆Views Only [VRAM⇈] +| Behavior | +|---| +| ![anim__00011](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/f2e422a4-c894-4e89-8f35-1964b89f369d)
(latent count: 64, view_length: 16, view_overlap: 4, View Options◆Standard Static, total steps: 20) | + +| Node | Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/8cd6a0a4-ee8a-46c3-b04b-a100f87025b3) | 🟩*view_opts_req*: View_options to be used across all latents.
🟨*prev_context*: Allows chaining of contexts.
| + + +There are View Options equivalent of these schedules: + +### View Options◆Standard Static +| Behavior | +|---| +| ![anim__00012](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/7aee4ccb-b669-42fd-a1b5-2005003d5f8d)
(latent count: 64, view_length: 16, view_overlap: 4, Context Options◆Standard Static, context_length: 32, context_overlap: 8, total steps: 20) | + +| Node | Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/4b22c73f-99cb-4781-bd33-e1b3db848207) | 🟦*view_length*: Amount of latents in context to pass into motion model at a time.
🟦*view_overlap*: Minimum common latents between adjacent windows.
🟦*fuse_method*: Method for averaging results of windows.
| + +### View Options◆Standard Uniform +| Behavior | +|---| +| ![anim__00015](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/faa2cd26-9f94-4fce-90b2-8acec84b444e )
(latent count: 64, view_length: 16, view_overlap: 4, view_stride: 1, Context Options◆Standard Static, context_length: 32, context_overlap: 8, total steps: 20) | + +| Node | Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/bbf017e6-3545-4043-ba41-fcbe2f54496a) | 🟦*view_length*: Amount of latents in context to pass into motion model at a time.
🟦*view_overlap*: Minimum common latents between adjacent windows.
🟦*view_stride*: Maximum 2^(stride-1) distance between adjacent latents.
🟦*fuse_method*: Method for averaging results of windows.
| + +### View Options◆Looped Uniform +| Behavior | +|---| +| ![anim__00016](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/8922b44b-cb19-4b2a-8486-2df8a46bf573)
(latent count: 64, view_length: 16, view_overlap: 4, view_stride: 1, closed_loop: False, Context Options◆Standard Static, context_length: 32, context_overlap: 8, total steps: 20) | +| NOTE: this one is probably not going to come out looking well unless you are using this for a very specific reason. | + +| Node | Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/c58fe4d4-81a8-436b-8028-9e81c2ace18a) | 🟦*view_length*: Amount of latents in context to pass into motion model at a time.
🟦*view_overlap*: Minimum common latents between adjacent windows.
🟦*view_stride*: Maximum 2^(stride-1) distance between adjacent latents.
🟦*closed_loop*: When True, adds additional windows to enhance looping.
🟦*use_on_equal_length*: When True, allows context to be used when latent count matches context_length - allows loops to be made when latent count == context_length.
🟦*fuse_method*: Method for averaging results of windows.
| + +## Sample Settings + +The Sample Settings node allows customization of the sampling process beyond what is exposed on most KSampler nodes. With its default values, it will NOT have any effect, and can safely be attached without changing any behavior. + +TL;DR To use FreeNoise, select ```FreeNoise``` from the noise_type dropdown. FreeNoise does not decrease performance in any way. To use FreeInit, attach the FreeInit Iteration Options to the iteration_opts input. NOTE: FreeInit, despite it's name, works by resampling the latents ```iterations``` amount of times - this means if you use iteration=2, total sampling time will be exactly twice as slow since it will be performing the sampling twice. + +Noise Layers with the inputs of the same name (or very close to same name) have same intended behavior as the ones for Sample Settings - refer to the inputs below. + +| Node | +|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/563a13cf-7aed-4acc-9ce3-1556660a34c2) | + +### Inputs +- 🟨*noise_layers*: Customizable, stackable noise to add to/modify initial noise. +- 🟨*iteration_opts*: Options for determining if (and how) sampling should be repeated consecutively; if you want to check out FreeInit, this is how to use it. +- 🟨*seed_override*: Accepts a single int to use a seed instead of the seed passed into the KSampler, or a list of ints (like via FizzNodes' BatchedValueSchedule) to assign individual seeds to each latent in the batch. +- 🟦*seed_offset*: When not set to 0, adds value to current seed, predictably changing it, whatever the original seed may have been. +- 🟦*batch_offset*: When not set to 0, will 'offset' the noise as if the first latent was actually the batch_offset-nth latent, shifting all the noises over. +- 🟦*noise_type*: Selects type of noise to be generated. Values include: + - **default**: generates different noise for all latents as usual. + - **constant**: generates exact same noise for all latents (based on seed). + - **empty**: generates no noise for all latents (as if noise was turned off). + - **repeated_context**: repeats noise every context_length (or view_length) amount of latents; stabilizes longer generations, but has very obvious repetition. + - **FreeNoise**: repeats noise such that it is repeated every context_length (or view_length), but the overlapped noise between contexts/views is shuffled to make repetition less prevelant while still achieving stabilization. +- 🟦*seed_gen*: Allows choosing between ComfyUI and Auto1111 methods of noise generation. One is not better than the other (noise distributions are the same), they are just different methods. + - **comfy**: Noise is generated for the entire latent batch tensor at once based on the provided seed. + - **auto1111**: Noise is generated individually for each latent, with each latent receiving an increasing +1 seed offset (first latent uses seed, second latent uses seed+1, etc.). +- 🟦*adapt_denoise_steps*: When True, KSamplers with a 'denoise' input will automatically scale down the total steps to run like the default options in Auto1111. + - **True**: Steps will decrease with lower denoise, i.e. 20 steps with 0.5 denoise will be 10 total steps executed, but sigmas will be selected that still achieve 0.5 denoise. Trades speed for quality (since less steps are sampled). + - **False**: Default behavior; 20 steps with 0.5 denoise will execute 20 steps. 
+ + +## Iteration Options + +These options allow KSamplers to re-sample the same latents without needing to chain multiple KSamplers together, and also allows specialized iteration behavior to implement features such as FreeInit. + +### Default Iteration Options + +Simply re-runs the KSampler, plugging in the output of the previous iteration into the next one. At the dafault iterations=1, it is no different than not having this node plugged in at all. + +| Node | Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/23c5e698-6eff-43cc-92e9-488e9b5ca96a) | 🟦*iterations*: Total amount of times KSampler should run back-to-back.
🟦*iter_batch_offset*: batch_offset to apply on each subsequent iteration.
🟦*iter_seed_offset*: seed_offset to apply on each subsequent iteration. | + +### FreeInit Iteration Options + +Implements [FreeInit](https://github.com/TianxingWu/FreeInit), which is the idea that AnimateDiff was trained on latents of existing videos (images with temporal coherence between them) that were then noised rather than from random initial noise, and that when noising existing latents, low-frequency data still remains in the noised latents. It combines the low-frequency noise from existing videos (or, as is the default behavior, the previous iteration) with the high-frequency noise in randomly generated noise to run the subsequent iterations. ***Each iteration is a full sample - 2 iterations means it will take twice as long to run as compared to having 1 iteration/no iteration_opts connected.*** + +When apply_to_1st_iter is False, the noising/low-freq/high-freq combination will not occur on the first iteration, with the assumption that there are no useful latents passed in to do the noise combining in the first place, thus requiring at least 2 iterations for FreeInit to take effect. + +If you have an existing set of latents to use to get low-freq noise from, you may set apply_to_1st_iter to True, and then even if you set iterations=1, FreeInit will still take effect. + +| Node | +|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/21404e4f-ab67-44ed-8bf9-e510bc2571de) | + +#### Inputs +- 🟦*iterations*: Total amount of times KSampler should run back-to-back. Refer to explanation above why it is 2 by default (and when it can be set to 1 instead). +- 🟦*init_type*: Code implementation for applying FreeInit. + - ***FreeInit [sampler sigma]***: likely closest to intended implementation, and gets the sigma for noising from the sampler instead of the model (when possible). + - ***FreeInit [model sigma]***: gets sigma for noising from the model; when using Custom KSampler, this is the method that will be used for both FreeInit options. + - ***DinkInit_v1***: my initial, flawed implementation of FreeInit before I figured out how to exactly copy the noising behavior. By sheer luck and trial and error, I managed to have it actually sort of work with this method. Mainly for backwards compatibility now, but might produce useful results too. + +- 🟦*apply_to_1st_iter*: When set to True, will do FreeInit low-freq/high-freq combo work even on the 1st iteration it runs Refer to explanation in the above FreeInit Iteration Options section for when this can be set to True. +- 🟦*init_type*: Code implementation for applying FreeInit. +- 🟦*iter_batch_offset*: batch_offset to apply on each subsequent iteration. +- 🟦*iter_seed_offset*: seed_offset to apply on each subsequent iteration. Defaults to 1 so that new random noise is used for each iteration. + +- 🟦*filter*: Determines low-freq filter to apply to noise. Very technical, look into code/online resources to figure out how the individual filters act. +- 🟦*d_s*: Spatial parameter of filter (within latents, I think); very technical. Look into code/online resources if you wish to know what exactly it does. +- 🟦*d_t*: Temporal parameter of filter (across latents, I think); very technical. Look into code/online resources if you wish to know what exactly it does. +- 🟦*n_butterworth*: Only applies to ```butterworth``` filter; very technical. Look into code/online resources if you wish to know what exactly it does. +- 🟦*sigma_step*: Noising step to use/emulate when noising latents to then get low-freq noise out of. 
999 actually means last (-1), and any number under 999 will mean the distance away from last. Leave at 999 unless you know what you're trying to do with it. + + +## Noise Layers + +These nodes allow initial noise to be added onto, weighted, or replaced. In near future, I will add the ability for masks to 'move' the noise relative to the masks' movement instead of just 'cutting and pasting' the noise. + +The inputs that are shared with Sample Settings have the same exact effect - only new option is in seed_gen_override, which by default will use same seed_gen as Sample Settings (use existing). You can make a noise layer use a different seed_gen strategy at will, or use a different seed/set of seeds, etc. + +The ```mask_optional``` parameter determines where on the initial noise the noise layer should be applied. + +| Node | Behavior + Inputs | +|---|---| +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/66487969-669d-47d3-9742-85ae26606903) | [Add]; Adds noise directly on top.
🟦*noise_weight*: Multiplier for noise layer before being added on top. | +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/52acb25c-9116-4594-b3fb-01b7b15bb79d) | [Add Weighted]; Adds noise, but takes a weighted average between what is already there and itself.
🟦*noise_weight*: Weight of new noise in the weighted average with existing noise.
🟦*balance_multipler*: Scale for how much noise_weight should affect existing noise; 1.0 means normal weighted average, and below 1.0 will lessen the weighted reduction by that amount (i.e. if balance_multiplier is set to 0.5 and noise_weight is 0.25, existing noise will only be reduced by 0.125 instead of 0.25, but new noise will be added with the unmodified 0.25 weight). | +| ![image](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/4feb586e-9920-4f35-8f92-e2e36fabb2df) | [Replace]; Directly replaces existing noise from layers underneath with itself. | + + +# Samples (download or drag images of the workflows into ComfyUI to instantly load the corresponding workflows!) + +NOTE: I've scaled down the gifs to 0.75x size to make them take up less space on the README. + +### txt2img + +| Result | +|---| +| ![readme_00006](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/b615a4aa-db3e-4b24-b88f-b694e52f6364) | +| Workflow | +| ![t2i_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/6eb47506-b503-482b-9baf-4c238f30a9c2) | + +### txt2img - (prompt travel) + +| Result | +|---| +| ![readme_00010](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/c27a2029-2c69-4272-b40f-64408e9e2ea6) | +| Workflow | +| ![t2i_prompttravel_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/e5a72ea1-628d-423e-98ed-f20e1bcc5320) | + + + +### txt2img - 48 frame animation with 16 context_length (Context Options◆Standard Static) + FreeNoise + +| Result | +|---| +| ![readme_00012](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/684f6e79-d653-482f-899a-1900dc56cd8f) | +| Workflow | +| ![t2i_context_freenoise_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/9d0e53fa-49d6-483d-a660-3f41d7451002) | + + +# Old Samples (TODO: update all of these + add new ones when I get sleep) + +### txt2img - 32 frame animation with 16 context_length (uniform) - PanLeft and ZoomOut Motion LoRAs + +![t2i_context_mlora_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/41ec4141-389c-4ef4-ae3e-a963a0fa841f) + +![aaa_readme_00094_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/14abee9a-5500-4d14-8632-15ac77ba5709) + +[aaa_readme_00095_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/d730ae2e-188c-4a61-8a6d-bd48f60a2d07) + + +### txt2img w/ latent upscale (partial denoise on upscale) + +![t2i_lat_ups_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/521991dd-8e39-4fed-9970-514507c75067) + +![aaa_readme_up_00001_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/f4199e25-c839-41ed-8986-fb7dbbe2ac52) + +[aaa_readme_up_00002_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/2f44342f-3fd8-4863-8e3d-360377d608b7) + + + +### txt2img w/ latent upscale (partial denoise on upscale) - PanLeft and ZoomOut Motion LoRAs + +![t2i_mlora_lat_ups_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/f34882de-7dd4-4264-8f59-e24da350be2a) + +![aaa_readme_up_00023_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/e2ca5c0c-b5d9-42de-b877-4ed29db81eb9) + +[aaa_readme_up_00024_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/414c16d8-231c-422f-8dfc-a93d4b68ffcc) + + + +### txt2img w/ latent upscale (partial denoise on upscale) - 48 frame 
animation with 16 context_length (uniform) + +![t2i_lat_ups_full_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/a1ebc14e-853e-4cda-9cda-9a7553fa3d85) + +[aaa_readme_up_00009_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/f7a45f81-e700-4bfe-9fdd-fbcaa4fa8a4e) + + + +### txt2img w/ latent upscale (full denoise on upscale) + +![t2i_lat_ups_full_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/5058f201-3f52-4c48-ac7e-525c3c8f3df3) + +![aaa_readme_up_00010_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/804610de-18ec-43af-9af2-4a83cf31d16b) + +[aaa_readme_up_00012_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/3eb575cf-92dd-434a-b3db-1a2064ff0033) + + + +### txt2img w/ latent upscale (full denoise on upscale) - 48 frame animation with 16 context_length (uniform) + +![t2i_context_lat_ups_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/7b9ec22b-d4e0-4083-9846-5743ed90583e) + +[aaa_readme_up_00014_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/034aff4c-f814-4b87-b5d1-407b1089af0d) + + + +### txt2img w/ ControlNet-stabilized latent-upscale (partial denoise on upscale, Scaled Soft ControlNet Weights) + +![t2i_lat_ups_softcontrol_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/c769c2bd-5aac-48d0-92b7-d73c422d4863) + +![aaa_readme_up_00017_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/221954cc-95df-4e0c-8ec9-266d0108dad4) + +[aaa_readme_up_00019_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/b562251d-a4fb-4141-94dd-9f8bca9f3ce8) + + + +### txt2img w/ ControlNet-stabilized latent-upscale (partial denoise on upscale, Scaled Soft ControlNet Weights) 48 frame animation with 16 context_length (uniform) + +![t2i_context_lat_ups_softcontrol_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/798567a8-4ef0-4814-aeeb-4f770df8d783) + +[aaa_readme_up_00003_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/0f57c949-0af3-4da4-b7c4-5c1fb1549927) + + + +### txt2img w/ Initial ControlNet input (using Normal LineArt preprocessor on first txt2img as an example) + +![t2i_initcn_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/caa7abdf-7ba0-456c-9fa4-547944ea6e72) + +![aaa_readme_cn_00002_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/055ef87c-50c6-4bb9-b35e-dd97916b47cc) + +[aaa_readme_cn_00003_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/9c9d425d-2378-4af0-8464-2c6c0d1a68bf) + + + +### txt2img w/ Initial ControlNet input (using Normal LineArt preprocessor on first txt2img 48 frame as an example) 48 frame animation with 16 context_length (uniform) + +![t2i_context_initcn_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/f9de2711-dcfd-4fea-8b3b-31e3794fbff9) + +![aaa_readme_cn_00005_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/6bf14361-5b09-4305-b2a7-f7babad4bd14) + +[aaa_readme_cn_00006_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/5d3665b7-c2da-46a1-88d8-ab43ba8eb0c6) + + + +### txt2img w/ Initial ControlNet input (using OpenPose images) + latent upscale w/ full denoise + 
+![t2i_openpose_upscale_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/306a40c4-0591-496d-a320-c33f0fc4b3d2) + +(open_pose images provided courtesy of toyxyz) + +![AA_openpose_cn_gif_00001_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff/assets/7365912/23291941-864d-495a-8ba8-d02e05756396) + +![aaa_readme_cn_00032_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/621a2ca6-2f08-4ed1-96ad-8e6635303173) + +[aaa_readme_cn_00033_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/c5df09a5-8c64-4811-9ecf-57ac73d82377) + + + +### txt2img w/ Initial ControlNet input (using OpenPose images) + latent upscale w/ full denoise, 48 frame animation with 16 context_length (uniform) + +![t2i_context_openpose_upscale_wf](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/a931af6f-bf6a-40d3-bd55-1d7bad32e665) + +(open_pose images provided courtesy of toyxyz) + +![aaa_readme_preview_00002_](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/028a1e9e-37b5-477d-8665-0e8723306d65) + +[aaa_readme_cn_00024_.webm](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/assets/7365912/8f4c840c-06a2-4c64-b97e-568dd5ff6f46) + + + +### img2img + +TODO: fill this out with a few useful ways, some using control net tile. I'm sorry there is nothing here right now, I have a lot of code to write. I'll try to fill this section out + Advance ControlNet use piece by piece. + + + +## Known Issues + +### Some motion models have visible watermark on resulting images (especially when using mm_sd_v15) + +Training data used by the authors of the AnimateDiff paper contained Shutterstock watermarks. Since mm_sd_v15 was finetuned on finer, less drastic movement, the motion module attempts to replicate the transparency of that watermark and does not get blurred away like mm_sd_v14. Using other motion modules, or combinations of them using Advanced KSamplers should alleviate watermark issues. diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/__init__.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..a5635e38ffbc0d461d83e4f41d7977612a96f088 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/__init__.py @@ -0,0 +1,10 @@ +import folder_paths +from .animatediff.logger import logger +from .animatediff.utils_model import get_available_motion_models, Folders +from .animatediff.nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS + +if len(get_available_motion_models()) == 0: + logger.error(f"No motion models found. 
Please download one and place in: {folder_paths.get_folder_paths(Folders.ANIMATEDIFF_MODELS)}") + +WEB_DIRECTORY = "./web" +__all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS", "WEB_DIRECTORY"] diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/ad_settings.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/ad_settings.py new file mode 100644 index 0000000000000000000000000000000000000000..f75c2a0c2ebd5f4ab3228beb11a6861825232d60 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/ad_settings.py @@ -0,0 +1,143 @@ +from torch import Tensor + +from .utils_motion import normalize_min_max + + +class AnimateDiffSettings: + def __init__(self, + adjust_pe: 'AdjustPEGroup'=None, + pe_strength: float=1.0, + attn_strength: float=1.0, + attn_q_strength: float=1.0, + attn_k_strength: float=1.0, + attn_v_strength: float=1.0, + attn_out_weight_strength: float=1.0, + attn_out_bias_strength: float=1.0, + other_strength: float=1.0, + attn_scale: float=1.0, + mask_attn_scale: Tensor=None, + mask_attn_scale_min: float=1.0, + mask_attn_scale_max: float=1.0, + ): + # PE-interpolation settings + self.adjust_pe = adjust_pe if adjust_pe is not None else AdjustPEGroup() + # general strengths + self.pe_strength = pe_strength + self.attn_strength = attn_strength + self.other_strength = other_strength + # specific attn strengths + self.attn_q_strength = attn_q_strength + self.attn_k_strength = attn_k_strength + self.attn_v_strength = attn_v_strength + self.attn_out_weight_strength = attn_out_weight_strength + self.attn_out_bias_strength = attn_out_bias_strength + # attention scale settings - DEPRECATED + self.attn_scale = attn_scale + # attention scale mask settings - DEPRECATED + self.mask_attn_scale = mask_attn_scale.clone() if mask_attn_scale is not None else mask_attn_scale + self.mask_attn_scale_min = mask_attn_scale_min + self.mask_attn_scale_max = mask_attn_scale_max + self._prepare_mask_attn_scale() + + def _prepare_mask_attn_scale(self): + if self.mask_attn_scale is not None: + self.mask_attn_scale = normalize_min_max(self.mask_attn_scale, self.mask_attn_scale_min, self.mask_attn_scale_max) + + def has_mask_attn_scale(self) -> bool: + return self.mask_attn_scale is not None + + def has_pe_strength(self) -> bool: + return self.pe_strength != 1.0 + + def has_attn_strength(self) -> bool: + return self.attn_strength != 1.0 + + def has_other_strength(self) -> bool: + return self.other_strength != 1.0 + + def has_anything_to_apply(self) -> bool: + return self.adjust_pe.has_anything_to_apply() \ + or self.has_pe_strength() \ + or self.has_attn_strength() \ + or self.has_other_strength() \ + or self.has_any_attn_sub_strength() + + def has_any_attn_sub_strength(self) -> bool: + return self.has_attn_q_strength() \ + or self.has_attn_k_strength() \ + or self.has_attn_v_strength() \ + or self.has_attn_out_weight_strength() \ + or self.has_attn_out_bias_strength() + + def has_attn_q_strength(self) -> bool: + return self.attn_q_strength != 1.0 + + def has_attn_k_strength(self) -> bool: + return self.attn_k_strength != 1.0 + + def has_attn_v_strength(self) -> bool: + return self.attn_v_strength != 1.0 + + def has_attn_out_weight_strength(self) -> bool: + return self.attn_out_weight_strength != 1.0 + + def has_attn_out_bias_strength(self) -> bool: + return self.attn_out_bias_strength != 1.0 + + +class AdjustPE: + def __init__(self, + cap_initial_pe_length: int=0, interpolate_pe_to_length: int=0, + initial_pe_idx_offset: int=0, final_pe_idx_offset: int=0, + 
motion_pe_stretch: int=0, print_adjustment=False): + # PE-interpolation settings + self.cap_initial_pe_length = cap_initial_pe_length + self.interpolate_pe_to_length = interpolate_pe_to_length + self.initial_pe_idx_offset = initial_pe_idx_offset + self.final_pe_idx_offset = final_pe_idx_offset + self.motion_pe_stretch = motion_pe_stretch + self.print_adjustment = print_adjustment + + def has_cap_initial_pe_length(self) -> bool: + return self.cap_initial_pe_length > 0 + + def has_interpolate_pe_to_length(self) -> bool: + return self.interpolate_pe_to_length > 0 + + def has_initial_pe_idx_offset(self) -> bool: + return self.initial_pe_idx_offset > 0 + + def has_final_pe_idx_offset(self) -> bool: + return self.final_pe_idx_offset > 0 + + def has_motion_pe_stretch(self) -> bool: + return self.motion_pe_stretch > 0 + + def has_anything_to_apply(self) -> bool: + return self.has_cap_initial_pe_length() \ + or self.has_interpolate_pe_to_length() \ + or self.has_initial_pe_idx_offset() \ + or self.has_final_pe_idx_offset() \ + or self.has_motion_pe_stretch() + + +class AdjustPEGroup: + def __init__(self, initial: AdjustPE=None): + self.adjusts: list[AdjustPE] = [] + if initial is not None: + self.add(initial) + + def add(self, adjust_pe: AdjustPE): + self.adjusts.append(adjust_pe) + + def has_anything_to_apply(self): + for adjust in self.adjusts: + if adjust.has_anything_to_apply(): + return True + return False + + def clone(self): + new_group = AdjustPEGroup() + for adjust in self.adjusts: + new_group.add(adjust) + return new_group diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/context.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/context.py new file mode 100644 index 0000000000000000000000000000000000000000..accc2070c535096d46d1b8d8dfa7d65210287890 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/context.py @@ -0,0 +1,389 @@ +from typing import Callable, Optional, Union + +import numpy as np +from torch import Tensor + +from comfy.model_base import BaseModel + +from .utils_motion import get_sorted_list_via_attr + +class ContextFuseMethod: + FLAT = "flat" + PYRAMID = "pyramid" + RELATIVE = "relative" + + LIST = [PYRAMID, FLAT] + LIST_STATIC = [PYRAMID, RELATIVE, FLAT] + + +class ContextType: + UNIFORM_WINDOW = "uniform window" + + +class ContextOptions: + def __init__(self, context_length: int=None, context_stride: int=None, context_overlap: int=None, + context_schedule: str=None, closed_loop: bool=False, fuse_method: str=ContextFuseMethod.FLAT, + use_on_equal_length: bool=False, view_options: 'ContextOptions'=None, + start_percent=0.0, guarantee_steps=1): + # permanent settings + self.context_length = context_length + self.context_stride = context_stride + self.context_overlap = context_overlap + self.context_schedule = context_schedule + self.closed_loop = closed_loop + self.fuse_method = fuse_method + self.sync_context_to_pe = False # this feature is likely bad and stay unused, so I might remove this + self.use_on_equal_length = use_on_equal_length + self.view_options = view_options.clone() if view_options else view_options + # scheduling + self.start_percent = float(start_percent) + self.start_t = 999999999.9 + self.guarantee_steps = guarantee_steps + # temporary vars + self._step: int = 0 + + @property + def step(self): + return self._step + @step.setter + def step(self, value: int): + self._step = value + if self.view_options: + self.view_options.step = value + + def clone(self): + n = ContextOptions(context_length=self.context_length, 
context_stride=self.context_stride, + context_overlap=self.context_overlap, context_schedule=self.context_schedule, + closed_loop=self.closed_loop, fuse_method=self.fuse_method, + use_on_equal_length=self.use_on_equal_length, view_options=self.view_options, + start_percent=self.start_percent, guarantee_steps=self.guarantee_steps) + n.start_t = self.start_t + return n + + +class ContextOptionsGroup: + def __init__(self): + self.contexts: list[ContextOptions] = [] + self._current_context: ContextOptions = None + self._current_used_steps: int = 0 + self._current_index: int = 0 + self.step = 0 + + def reset(self): + self._current_context = None + self._current_used_steps = 0 + self._current_index = 0 + self.step = 0 + self._set_first_as_current() + + @classmethod + def default(cls): + def_context = ContextOptions() + new_group = ContextOptionsGroup() + new_group.add(def_context) + return new_group + + def add(self, context: ContextOptions): + # add to end of list, then sort + self.contexts.append(context) + self.contexts = get_sorted_list_via_attr(self.contexts, "start_percent") + self._set_first_as_current() + + def add_to_start(self, context: ContextOptions): + # add to start of list, then sort + self.contexts.insert(0, context) + self.contexts = get_sorted_list_via_attr(self.contexts, "start_percent") + self._set_first_as_current() + + def has_index(self, index: int) -> int: + return index >=0 and index < len(self.contexts) + + def is_empty(self) -> bool: + return len(self.contexts) == 0 + + def clone(self): + cloned = ContextOptionsGroup() + for context in self.contexts: + cloned.contexts.append(context) + cloned._set_first_as_current() + return cloned + + def initialize_timesteps(self, model: BaseModel): + for context in self.contexts: + context.start_t = model.model_sampling.percent_to_sigma(context.start_percent) + + def prepare_current_context(self, t: Tensor): + curr_t: float = t[0] + prev_index = self._current_index + # if met guaranteed steps, look for next context in case need to switch + if self._current_used_steps >= self._current_context.guarantee_steps: + # if has next index, loop through and see if need to switch + if self.has_index(self._current_index+1): + for i in range(self._current_index+1, len(self.contexts)): + eval_c = self.contexts[i] + # check if start_t is greater or equal to curr_t + # NOTE: t is in terms of sigmas, not percent, so bigger number = earlier step in sampling + if eval_c.start_t >= curr_t: + self._current_index = i + self._current_context = eval_c + self._current_used_steps = 0 + # if guarantee_steps greater than zero, stop searching for other keyframes + if self._current_context.guarantee_steps > 0: + break + # if eval_c is outside the percent range, stop looking further + else: + break + # update steps current context is used + self._current_used_steps += 1 + + def _set_first_as_current(self): + if len(self.contexts) > 0: + self._current_context = self.contexts[0] + + # properties shadow those of ContextOptions + @property + def context_length(self): + return self._current_context.context_length + + @property + def context_overlap(self): + return self._current_context.context_overlap + + @property + def context_stride(self): + return self._current_context.context_stride + + @property + def context_schedule(self): + return self._current_context.context_schedule + + @property + def closed_loop(self): + return self._current_context.closed_loop + + @property + def fuse_method(self): + return self._current_context.fuse_method + + @property + def 
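Since scheduling in `ContextOptionsGroup` is driven entirely by `start_percent` and `guarantee_steps`, here is a minimal sketch of how two scheduled option sets might be wired up (the `base_model` handle and the sampler loop are assumed, not shown in this file):

```python
group = ContextOptionsGroup()
group.add(ContextOptions(context_length=16, context_overlap=4,
                         context_schedule="standard_static", start_percent=0.0))
group.add(ContextOptions(context_length=12, context_overlap=4,
                         context_schedule="standard_static", start_percent=0.5))

# before sampling, percents get converted to sigmas on the loaded model:
#   group.initialize_timesteps(base_model)
# each step then calls group.prepare_current_context(t); sigmas shrink as
# sampling progresses, so the second option set takes over once t[0] <= its start_t
```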
use_on_equal_length(self): + return self._current_context.use_on_equal_length + + @property + def view_options(self): + return self._current_context.view_options + + +class ContextSchedules: + UNIFORM_LOOPED = "looped_uniform" + UNIFORM_STANDARD = "standard_uniform" + STATIC_STANDARD = "standard_static" + BATCHED = "batched" + VIEW_AS_CONTEXT = "view_as_context" + + LEGACY_UNIFORM_LOOPED = "uniform" + LEGACY_UNIFORM_SCHEDULE_LIST = [LEGACY_UNIFORM_LOOPED] + + +# from https://github.com/neggles/animatediff-cli/blob/main/src/animatediff/pipelines/context.py +def create_windows_uniform_looped(num_frames: int, opts: Union[ContextOptionsGroup, ContextOptions]): + windows = [] + if num_frames < opts.context_length: + windows.append(list(range(num_frames))) + return windows + + context_stride = min(opts.context_stride, int(np.ceil(np.log2(num_frames / opts.context_length))) + 1) + # obtain uniform windows as normal, looping and all + for context_step in 1 << np.arange(context_stride): + pad = int(round(num_frames * ordered_halving(opts.step))) + for j in range( + int(ordered_halving(opts.step) * context_step) + pad, + num_frames + pad + (0 if opts.closed_loop else -opts.context_overlap), + (opts.context_length * context_step - opts.context_overlap), + ): + windows.append([e % num_frames for e in range(j, j + opts.context_length * context_step, context_step)]) + + return windows + + +def create_windows_uniform_standard(num_frames: int, opts: Union[ContextOptionsGroup, ContextOptions]): + # unlike looped, uniform_straight does NOT allow windows that loop back to the beginning; + # instead, they get shifted to the corresponding end of the frames. + # in the case that a window (shifted or not) is identical to the previous one, it gets skipped. + windows = [] + if num_frames <= opts.context_length: + windows.append(list(range(num_frames))) + return windows + + context_stride = min(opts.context_stride, int(np.ceil(np.log2(num_frames / opts.context_length))) + 1) + # first, obtain uniform windows as normal, looping and all + for context_step in 1 << np.arange(context_stride): + pad = int(round(num_frames * ordered_halving(opts.step))) + for j in range( + int(ordered_halving(opts.step) * context_step) + pad, + num_frames + pad + (-opts.context_overlap), + (opts.context_length * context_step - opts.context_overlap), + ): + windows.append([e % num_frames for e in range(j, j + opts.context_length * context_step, context_step)]) + + # now that windows are created, shift any windows that loop, and delete duplicate windows + delete_idxs = [] + win_i = 0 + while win_i < len(windows): + # if window is rolls over itself, need to shift it + is_roll, roll_idx = does_window_roll_over(windows[win_i], num_frames) + if is_roll: + roll_val = windows[win_i][roll_idx] # roll_val might not be 0 for windows of higher strides + shift_window_to_end(windows[win_i], num_frames=num_frames) + # check if next window (cyclical) is missing roll_val + if roll_val not in windows[(win_i+1) % len(windows)]: + # need to insert new window here - just insert window starting at roll_val + windows.insert(win_i+1, list(range(roll_val, roll_val + opts.context_length))) + # delete window if it's not unique + for pre_i in range(0, win_i): + if windows[win_i] == windows[pre_i]: + delete_idxs.append(win_i) + break + win_i += 1 + + # reverse delete_idxs so that they will be deleted in an order that doesn't break idx correlation + delete_idxs.reverse() + for i in delete_idxs: + windows.pop(i) + + return windows + + +def 
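For intuition, the looped uniform scheduler can be checked by hand; with the values below (`step` starts at 0, so `ordered_halving(0)` is 0.0 and the pad is 0):

```python
opts = ContextOptions(context_length=16, context_stride=1,
                      context_overlap=4, closed_loop=True)
print(create_windows_uniform_looped(24, opts))
# window starts advance by context_length - context_overlap = 12 frames:
# [[0, 1, ..., 15], [12, 13, ..., 23, 0, 1, 2, 3]]
# the second window wraps past frame 23 because closed_loop=True
```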
create_windows_static_standard(num_frames: int, opts: Union[ContextOptionsGroup, ContextOptions]): + windows = [] + if num_frames <= opts.context_length: + windows.append(list(range(num_frames))) + return windows + # always return the same set of windows + delta = opts.context_length - opts.context_overlap + for start_idx in range(0, num_frames, delta): + # if past the end of frames, move start_idx back to allow same context_length + ending = start_idx + opts.context_length + if ending >= num_frames: + final_delta = ending - num_frames + final_start_idx = start_idx - final_delta + windows.append(list(range(final_start_idx, final_start_idx + opts.context_length))) + break + windows.append(list(range(start_idx, start_idx + opts.context_length))) + return windows + + +def create_windows_batched(num_frames: int, opts: Union[ContextOptionsGroup, ContextOptions]): + windows = [] + if num_frames <= opts.context_length: + windows.append(list(range(num_frames))) + return windows + # always return the same set of windows; + # no overlap, just cut up based on context_length; + # last window size will be different if num_frames % opts.context_length != 0 + for start_idx in range(0, num_frames, opts.context_length): + windows.append(list(range(start_idx, min(start_idx + opts.context_length, num_frames)))) + return windows + + +def create_windows_default(num_frames: int, opts: Union[ContextOptionsGroup, ContextOptions]): + return [list(range(num_frames))] + + +def get_context_windows(num_frames: int, opts: Union[ContextOptionsGroup, ContextOptions]): + context_func = CONTEXT_MAPPING.get(opts.context_schedule, None) + if not context_func: + raise ValueError(f"Unknown context_schedule '{opts.context_schedule}'.") + return context_func(num_frames, opts) + + +CONTEXT_MAPPING = { + ContextSchedules.UNIFORM_LOOPED: create_windows_uniform_looped, + ContextSchedules.UNIFORM_STANDARD: create_windows_uniform_standard, + ContextSchedules.STATIC_STANDARD: create_windows_static_standard, + ContextSchedules.BATCHED: create_windows_batched, + ContextSchedules.VIEW_AS_CONTEXT: create_windows_default, # just return all to allow Views to do all the work +} + + +def get_context_weights(num_frames: int, fuse_method: str): + weights_func = FUSE_MAPPING.get(fuse_method, None) + if not weights_func: + raise ValueError(f"Unknown fuse_method '{fuse_method}'.") + return weights_func(num_frames) + + +def create_weights_flat(length: int, **kwargs) -> list[float]: + # weight is the same for all + return [1.0] * length + + +def create_weights_pyramid(length: int, **kwargs) -> list[float]: + # weight is based on the distance away from the edge of the context window; + # based on weighted average concept in FreeNoise paper + if length % 2 == 0: + max_weight = length // 2 + weight_sequence = list(range(1, max_weight + 1, 1)) + list(range(max_weight, 0, -1)) + else: + max_weight = (length + 1) // 2 + weight_sequence = list(range(1, max_weight, 1)) + [max_weight] + list(range(max_weight - 1, 0, -1)) + return weight_sequence + + +FUSE_MAPPING = { + ContextFuseMethod.FLAT: create_weights_flat, + ContextFuseMethod.PYRAMID: create_weights_pyramid, + ContextFuseMethod.RELATIVE: create_weights_pyramid, +} + + +# Returns fraction that has denominator that is a power of 2 +def ordered_halving(val): + # get binary value, padded with 0s for 64 bits + bin_str = f"{val:064b}" + # flip binary value, padding included + bin_flip = bin_str[::-1] + # convert binary to int + as_int = int(bin_flip, 2) + # divide by 1 << 64, equivalent to 2**64, or 
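The static scheduler and the FreeNoise-style pyramid weights are likewise easy to sanity-check:

```python
opts = ContextOptions(context_length=4, context_overlap=2)
print(create_windows_static_standard(10, opts))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
# the final window is shifted back so every window keeps the full context_length

print(create_weights_pyramid(4))  # [1, 2, 2, 1]
print(create_weights_pyramid(5))  # [1, 2, 3, 2, 1] - edge frames get the least weight
```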
18446744073709551616, + # or b10000000000000000000000000000000000000000000000000000000000000000 (1 with 64 zero's) + return as_int / (1 << 64) + + +def get_missing_indexes(windows: list[list[int]], num_frames: int) -> list[int]: + all_indexes = list(range(num_frames)) + for w in windows: + for val in w: + try: + all_indexes.remove(val) + except ValueError: + pass + return all_indexes + + +def does_window_roll_over(window: list[int], num_frames: int) -> tuple[bool, int]: + prev_val = -1 + for i, val in enumerate(window): + val = val % num_frames + if val < prev_val: + return True, i + prev_val = val + return False, -1 + + +def shift_window_to_start(window: list[int], num_frames: int): + start_val = window[0] + for i in range(len(window)): + # 1) subtract each element by start_val to move vals relative to the start of all frames + # 2) add num_frames and take modulus to get adjusted vals + window[i] = ((window[i] - start_val) + num_frames) % num_frames + + +def shift_window_to_end(window: list[int], num_frames: int): + # 1) shift window to start + shift_window_to_start(window, num_frames) + end_val = window[-1] + end_delta = num_frames - end_val - 1 + for i in range(len(window)): + # 2) add end_delta to each val to slide windows to end + window[i] = window[i] + end_delta diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/freeinit.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/freeinit.py new file mode 100644 index 0000000000000000000000000000000000000000..ac9edfab49bcce0858be06e66ed6c6eee2552d53 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/freeinit.py @@ -0,0 +1,162 @@ +# S-Lab License 1.0 + +# Copyright 2023 S-Lab +# Redistribution and use for non-commercial purpose in source and binary forms, with or without modification, are permitted provided that the following conditions are met: +# 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. +# 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +# 4. In the event that redistribution and/or use for commercial purpose in source or binary forms, with or without modification is required, please contact the contributor(s) of the work. 
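Before the FreeInit code proper, a quick hand-check of the context helpers defined just above:

```python
print(ordered_halving(1))  # 0.5  - binary 1 reversed across 64 bits -> 2**63 / 2**64
print(ordered_halving(2))  # 0.25
print(ordered_halving(3))  # 0.75

window = [14, 15, 0, 1]                   # wraps around a 16-frame animation
print(does_window_roll_over(window, 16))  # (True, 2) - rollover detected at index 2
shift_window_to_end(window, num_frames=16)
print(window)                             # [12, 13, 14, 15]
```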
+ +# Code has been modified from https://github.com/TianxingWu/FreeInit + +import torch +import torch.fft as fft +import math + + +class FreeInitFilter: + GAUSSIAN = "gaussian" + IDEAL = "ideal" + BOX = "box" + BUTTERWORTH = "butterworth" + + LIST = [GAUSSIAN, BUTTERWORTH, IDEAL, BOX] + + +def freq_mix_3d(x, noise, LPF): + """ + Noise reinitialization. + + Args: + x: diffused latent + noise: randomly sampled noise + LPF: low pass filter + """ + # FFT + x_freq = fft.fftn(x, dim=(-4, -2, -1)) + x_freq = fft.fftshift(x_freq, dim=(-4, -2, -1)) + noise_freq = fft.fftn(noise, dim=(-4, -2, -1)) + noise_freq = fft.fftshift(noise_freq, dim=(-4, -2, -1)) + + # frequency mix + HPF = 1 - LPF + x_freq_low = x_freq * LPF + noise_freq_high = noise_freq * HPF + x_freq_mixed = x_freq_low + noise_freq_high # mix in freq domain + + # IFFT + x_freq_mixed = fft.ifftshift(x_freq_mixed, dim=(-4, -2, -1)) + x_mixed = fft.ifftn(x_freq_mixed, dim=(-4, -2, -1)).real + + return x_mixed + + +def get_freq_filter(shape, device, filter_type, n, d_s, d_t): + """ + Form the frequency filter for noise reinitialization. + + Args: + shape: shape of latent (T, C, H, W) + filter_type: type of the freq filter + n: (only for butterworth) order of the filter, larger n ~ ideal, smaller n ~ gaussian + d_s: normalized stop frequency for spatial dimensions (0.0-1.0) + d_t: normalized stop frequency for temporal dimension (0.0-1.0) + """ + if filter_type == FreeInitFilter.GAUSSIAN: + return gaussian_low_pass_filter(shape=shape, d_s=d_s, d_t=d_t).to(device) + elif filter_type == FreeInitFilter.IDEAL: + return ideal_low_pass_filter(shape=shape, d_s=d_s, d_t=d_t).to(device) + elif filter_type == FreeInitFilter.BOX: + return box_low_pass_filter(shape=shape, d_s=d_s, d_t=d_t).to(device) + elif filter_type == FreeInitFilter.BUTTERWORTH: + return butterworth_low_pass_filter(shape=shape, n=n, d_s=d_s, d_t=d_t).to(device) + else: + raise NotImplementedError + +def gaussian_low_pass_filter(shape, d_s=0.25, d_t=0.25): + """ + Compute the gaussian low pass filter mask. + + Args: + shape: shape of the filter (volume) + d_s: normalized stop frequency for spatial dimensions (0.0-1.0) + d_t: normalized stop frequency for temporal dimension (0.0-1.0) + """ + T, H, W = shape[-4], shape[-2], shape[-1] + mask = torch.zeros(shape) + if d_s==0 or d_t==0: + return mask + for t in range(T): + for h in range(H): + for w in range(W): + d_square = (((d_s/d_t)*(2*t/T-1))**2 + (2*h/H-1)**2 + (2*w/W-1)**2) + mask[t, ..., h,w] = math.exp(-1/(2*d_s**2) * d_square) + return mask + + +def butterworth_low_pass_filter(shape, n=4, d_s=0.25, d_t=0.25): + """ + Compute the butterworth low pass filter mask. + + Args: + shape: shape of the filter (volume) + n: order of the filter, larger n ~ ideal, smaller n ~ gaussian + d_s: normalized stop frequency for spatial dimensions (0.0-1.0) + d_t: normalized stop frequency for temporal dimension (0.0-1.0) + """ + T, H, W = shape[-4], shape[-2], shape[-1] + mask = torch.zeros(shape) + if d_s==0 or d_t==0: + return mask + for t in range(T): + for h in range(H): + for w in range(W): + d_square = (((d_s/d_t)*(2*t/T-1))**2 + (2*h/H-1)**2 + (2*w/W-1)**2) + mask[t, ..., h,w] = 1 / (1 + (d_square / d_s**2)**n) + return mask + + +def ideal_low_pass_filter(shape, d_s=0.25, d_t=0.25): + """ + Compute the ideal low pass filter mask. 
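Putting the pieces above together, a rough usage sketch of `get_freq_filter` plus `freq_mix_3d` (the latent shape is made up for illustration; the real caller lives elsewhere in this pack):

```python
import torch

latents = torch.randn(16, 4, 32, 32)  # (T, C, H, W), the convention get_freq_filter documents
fresh_noise = torch.randn_like(latents)
lpf = get_freq_filter(latents.shape, latents.device,
                      filter_type=FreeInitFilter.BUTTERWORTH, n=4, d_s=0.25, d_t=0.25)
# keep the low-frequency structure of the diffused latents,
# re-randomize only the high frequencies
reinitialized = freq_mix_3d(latents, fresh_noise, lpf)
```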
+ + Args: + shape: shape of the filter (volume) + d_s: normalized stop frequency for spatial dimensions (0.0-1.0) + d_t: normalized stop frequency for temporal dimension (0.0-1.0) + """ + T, H, W = shape[-4], shape[-2], shape[-1] + mask = torch.zeros(shape) + if d_s==0 or d_t==0: + return mask + for t in range(T): + for h in range(H): + for w in range(W): + d_square = (((d_s/d_t)*(2*t/T-1))**2 + (2*h/H-1)**2 + (2*w/W-1)**2) + mask[t, ...,h,w] = 1 if d_square <= d_s*2 else 0 + return mask + + +def box_low_pass_filter(shape, d_s=0.25, d_t=0.25): + """ + Compute the ideal low pass filter mask (approximated version). + + Args: + shape: shape of the filter (volume) + d_s: normalized stop frequency for spatial dimensions (0.0-1.0) + d_t: normalized stop frequency for temporal dimension (0.0-1.0) + """ + T, H, W = shape[-4], shape[-2], shape[-1] + mask = torch.zeros(shape) + if d_s==0 or d_t==0: + return mask + + threshold_s = round(int(H // 2) * d_s) + threshold_t = round(T // 2 * d_t) + + cframe, crow, ccol = T // 2, H // 2, W //2 + mask[cframe - threshold_t:cframe + threshold_t, ..., crow - threshold_s:crow + threshold_s, ccol - threshold_s:ccol + threshold_s] = 1.0 + + return mask + diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/logger.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/logger.py new file mode 100644 index 0000000000000000000000000000000000000000..09b171a5c9728ef8e1b812e2540872f51014ba1e --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/logger.py @@ -0,0 +1,36 @@ +import copy +import logging +import sys + + +class ColoredFormatter(logging.Formatter): + COLORS = { + "DEBUG": "\033[0;36m", # CYAN + "INFO": "\033[0;32m", # GREEN + "WARNING": "\033[0;33m", # YELLOW + "ERROR": "\033[0;31m", # RED + "CRITICAL": "\033[0;37;41m", # WHITE ON RED + "RESET": "\033[0m", # RESET COLOR + } + + def format(self, record): + colored_record = copy.copy(record) + levelname = colored_record.levelname + seq = self.COLORS.get(levelname, self.COLORS["RESET"]) + colored_record.levelname = f"{seq}{levelname}{self.COLORS['RESET']}" + return super().format(colored_record) + + +# Create a new logger +logger = logging.getLogger("AnimateDiffEvo") +logger.propagate = False + +# Add handler if we don't have one. 
+if not logger.handlers: + handler = logging.StreamHandler(sys.stdout) + handler.setFormatter(ColoredFormatter("[%(name)s] - %(levelname)s - %(message)s")) + logger.addHandler(handler) + +# Configure logger +loglevel = logging.INFO +logger.setLevel(loglevel) diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_injection.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_injection.py new file mode 100644 index 0000000000000000000000000000000000000000..3ad7a6196adfea51a76ab1ffa4b17c59197fda28 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_injection.py @@ -0,0 +1,581 @@ +import copy +from typing import Union + +from einops import rearrange +from torch import Tensor +import torch.nn.functional as F +import torch + +import comfy.model_management +import comfy.utils +from comfy.model_patcher import ModelPatcher +from comfy.model_base import BaseModel + +from .ad_settings import AnimateDiffSettings +from .context import ContextOptions, ContextOptions, ContextOptionsGroup +from .motion_module_ad import AnimateDiffModel, AnimateDiffFormat, has_mid_block, normalize_ad_state_dict +from .logger import logger +from .utils_motion import ADKeyframe, ADKeyframeGroup, MotionCompatibilityError, get_combined_multival, normalize_min_max +from .motion_lora import MotionLoraInfo, MotionLoraList +from .utils_model import get_motion_lora_path, get_motion_model_path, get_sd_model_type +from .sample_settings import SampleSettings, SeedNoiseGeneration + + +# some motion_model casts here might fail if model becomes metatensor or is not castable; +# should not really matter if it fails, so ignore raised Exceptions +class ModelPatcherAndInjector(ModelPatcher): + def __init__(self, m: ModelPatcher): + # replicate ModelPatcher.clone() to initialize ModelPatcherAndInjector + super().__init__(m.model, m.load_device, m.offload_device, m.size, m.current_device, weight_inplace_update=m.weight_inplace_update) + self.patches = {} + for k in m.patches: + self.patches[k] = m.patches[k][:] + + self.object_patches = m.object_patches.copy() + self.model_options = copy.deepcopy(m.model_options) + self.model_keys = m.model_keys + + # injection stuff + self.motion_injection_params: InjectionParams = None + self.sample_settings: SampleSettings = SampleSettings() + self.motion_models: MotionModelGroup = None + + def model_patches_to(self, device): + super().model_patches_to(device) + if self.motion_models is not None: + for motion_model in self.motion_models.models: + try: + motion_model.model.to(device) + except Exception: + pass + + def patch_model(self, device_to=None): + # first, perform model patching + patched_model = super().patch_model(device_to) + # finally, perform motion model injection + self.inject_model(device_to=device_to) + return patched_model + + def unpatch_model(self, device_to=None): + # first, eject motion model from unet + self.eject_model(device_to=device_to) + # finally, do normal model unpatching + return super().unpatch_model(device_to) + + def inject_model(self, device_to=None): + if self.motion_models is not None: + for motion_model in self.motion_models.models: + motion_model.model.inject(self) + try: + motion_model.model.to(device_to) + except Exception: + pass + + def eject_model(self, device_to=None): + if self.motion_models is not None: + for motion_model in self.motion_models.models: + motion_model.model.eject(self) + try: + motion_model.model.to(device_to) + except Exception: + pass + + def clone(self): + cloned = 
ModelPatcherAndInjector(self) + cloned.motion_models = self.motion_models.clone() if self.motion_models else self.motion_models + cloned.sample_settings = self.sample_settings + cloned.motion_injection_params = self.motion_injection_params.clone() if self.motion_injection_params else self.motion_injection_params + return cloned + + +class MotionModelPatcher(ModelPatcher): + # Mostly here so that type hints work in IDEs + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.model: AnimateDiffModel = self.model + self.timestep_percent_range = (0.0, 1.0) + self.timestep_range: tuple[float, float] = None + self.keyframes: ADKeyframeGroup = ADKeyframeGroup() + + self.scale_multival = None + self.effect_multival = None + # temporary variables + self.current_used_steps = 0 + self.current_keyframe: ADKeyframe = None + self.current_index = -1 + self.current_scale: Union[float, Tensor] = None + self.current_effect: Union[float, Tensor] = None + self.combined_scale: Union[float, Tensor] = None + self.combined_effect: Union[float, Tensor] = None + self.was_within_range = False + + def patch_model(self, *args, **kwargs): + # patch as normal, but prepare_weights so that lowvram meta device works properly + patched_model = super().patch_model(*args, **kwargs) + self.prepare_weights() + return patched_model + + def prepare_weights(self): + # in case lowvram is active and meta device is used, need to convert weights + # otherwise, will get exceptions thrown related to meta device + # TODO: with new comfy lowvram system, this is unnecessary + state_dict = self.model.state_dict() + for key in state_dict: + weight = comfy.model_management.resolve_lowvram_weight(state_dict[key], self.model, key) + try: + comfy.utils.set_attr(self.model, key, weight) + except Exception: + pass + + def pre_run(self, model: ModelPatcherAndInjector): + self.cleanup() + self.model.reset() + # just in case, prepare_weights before every run + self.prepare_weights() + self.model.set_scale(self.scale_multival) + self.model.set_effect(self.effect_multival) + + def initialize_timesteps(self, model: BaseModel): + self.timestep_range = (model.model_sampling.percent_to_sigma(self.timestep_percent_range[0]), + model.model_sampling.percent_to_sigma(self.timestep_percent_range[1])) + if self.keyframes is not None: + for keyframe in self.keyframes.keyframes: + keyframe.start_t = model.model_sampling.percent_to_sigma(keyframe.start_percent) + + def prepare_current_keyframe(self, t: Tensor): + curr_t: float = t[0] + prev_index = self.current_index + # if met guaranteed steps, look for next keyframe in case need to switch + if self.current_keyframe is None or self.current_used_steps >= self.current_keyframe.guarantee_steps: + # if has next index, loop through and see if need to switch + if self.keyframes.has_index(self.current_index+1): + for i in range(self.current_index+1, len(self.keyframes)): + eval_kf = self.keyframes[i] + # check if start_t is greater or equal to curr_t + # NOTE: t is in terms of sigmas, not percent, so bigger number = earlier step in sampling + if eval_kf.start_t >= curr_t: + self.current_index = i + self.current_keyframe = eval_kf + self.current_used_steps = 0 + # keep track of scale and effect multivals, accounting for inherit_missing + if self.current_keyframe.has_scale(): + self.current_scale = self.current_keyframe.scale_multival + elif not self.current_keyframe.inherit_missing: + self.current_scale = None + if self.current_keyframe.has_effect(): + self.current_effect = 
self.current_keyframe.effect_multival + elif not self.current_keyframe.inherit_missing: + self.current_effect = None + # if guarantee_steps greater than zero, stop searching for other keyframes + if self.current_keyframe.guarantee_steps > 0: + break + # if eval_kf is outside the percent range, stop looking further + else: + break + # if index changed, apply new combined values + if prev_index != self.current_index: + # combine model's scale and effect with keyframe's scale and effect + self.combined_scale = get_combined_multival(self.scale_multival, self.current_scale) + self.combined_effect = get_combined_multival(self.effect_multival, self.current_effect) + # apply scale and effect + self.model.set_scale(self.combined_scale) + self.model.set_effect(self.combined_effect) + # apply effect - if not within range, set effect to 0, effectively turning model off + if curr_t > self.timestep_range[0] or curr_t < self.timestep_range[1]: + self.model.set_effect(0.0) + self.was_within_range = False + else: + # if was not in range last step, apply effect to toggle AD status + if not self.was_within_range: + self.model.set_effect(self.combined_effect) + self.was_within_range = True + # update steps current keyframe is used + self.current_used_steps += 1 + + def cleanup(self): + if self.model is not None: + self.model.cleanup() + self.current_used_steps = 0 + self.current_keyframe = None + self.current_index = -1 + self.current_scale = None + self.current_effect = None + self.combined_scale = None + self.combined_effect = None + self.was_within_range = False + + def clone(self): + # normal ModelPatcher clone actions + n = MotionModelPatcher(self.model, self.load_device, self.offload_device, self.size, self.current_device, weight_inplace_update=self.weight_inplace_update) + n.patches = {} + for k in self.patches: + n.patches[k] = self.patches[k][:] + + n.object_patches = self.object_patches.copy() + n.model_options = copy.deepcopy(self.model_options) + n.model_keys = self.model_keys + # extra cloned params + n.timestep_percent_range = self.timestep_percent_range + n.timestep_range = self.timestep_range + n.keyframes = self.keyframes.clone() + n.scale_multival = self.scale_multival + n.effect_multival = self.effect_multival + return n + + +class MotionModelGroup: + def __init__(self, init_motion_model: MotionModelPatcher=None): + self.models: list[MotionModelPatcher] = [] + if init_motion_model is not None: + self.add(init_motion_model) + + def add(self, mm: MotionModelPatcher): + # add to end of list + self.models.append(mm) + + def add_to_start(self, mm: MotionModelPatcher): + self.models.insert(0, mm) + + def __getitem__(self, index) -> MotionModelPatcher: + return self.models[index] + + def is_empty(self) -> bool: + return len(self.models) == 0 + + def clone(self) -> 'MotionModelGroup': + cloned = MotionModelGroup() + for mm in self.models: + cloned.add(mm) + return cloned + + def set_sub_idxs(self, sub_idxs: list[int]): + for motion_model in self.models: + motion_model.model.set_sub_idxs(sub_idxs=sub_idxs) + + def set_view_options(self, view_options: ContextOptions): + for motion_model in self.models: + motion_model.model.set_view_options(view_options) + + def set_video_length(self, video_length: int, full_length: int): + for motion_model in self.models: + motion_model.model.set_video_length(video_length=video_length, full_length=full_length) + + def initialize_timesteps(self, model: BaseModel): + for motion_model in self.models: + motion_model.initialize_timesteps(model) + + def pre_run(self, 
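The timestep gating in `prepare_current_keyframe` is easiest to follow with toy numbers (the sigmas below are invented, standing in for `percent_to_sigma` output):

```python
timestep_range = (14.6, 2.9)  # hypothetical (start, end) sigmas for percent range (0.0, 0.7)
for curr_t in (15.0, 10.0, 1.0):
    outside = curr_t > timestep_range[0] or curr_t < timestep_range[1]
    print(curr_t, "effect forced to 0.0 (AD off)" if outside else "combined_effect applied")
# 15.0 -> off (sampling has not yet reached the range)
# 10.0 -> on
#  1.0 -> off (sampling has moved past the range)
```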
model: ModelPatcherAndInjector): + for motion_model in self.models: + motion_model.pre_run(model) + + def prepare_current_keyframe(self, t: Tensor): + for motion_model in self.models: + motion_model.prepare_current_keyframe(t=t) + + def get_name_string(self, show_version=False): + identifiers = [] + for motion_model in self.models: + id = motion_model.model.mm_info.mm_name + if show_version: + id += f":{motion_model.model.mm_info.mm_version}" + identifiers.append(id) + return ", ".join(identifiers) + + +def get_vanilla_model_patcher(m: ModelPatcher) -> ModelPatcher: + model = ModelPatcher(m.model, m.load_device, m.offload_device, m.size, m.current_device, weight_inplace_update=m.weight_inplace_update) + model.patches = {} + for k in m.patches: + model.patches[k] = m.patches[k][:] + + model.object_patches = m.object_patches.copy() + model.model_options = copy.deepcopy(m.model_options) + model.model_keys = m.model_keys + return model + +# adapted from https://github.com/guoyww/AnimateDiff/blob/main/animatediff/utils/convert_lora_safetensor_to_diffusers.py +# Example LoRA keys: +# down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.processor.to_q_lora.down.weight +# down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.processor.to_q_lora.up.weight +# +# Example model keys: +# down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight +# +def load_motion_lora_as_patches(motion_model: MotionModelPatcher, lora: MotionLoraInfo) -> None: + def get_version(has_midblock: bool): + return "v2" if has_midblock else "v1" + + lora_path = get_motion_lora_path(lora.name) + logger.info(f"Loading motion LoRA {lora.name}") + state_dict = comfy.utils.load_torch_file(lora_path) + + # remove all non-temporal keys (in case model has extra stuff in it) + for key in list(state_dict.keys()): + if "temporal" not in key: + del state_dict[key] + if len(state_dict) == 0: + raise ValueError(f"'{lora.name}' contains no temporal keys; it is not a valid motion LoRA!") + + model_has_midblock = motion_model.model.mid_block != None + lora_has_midblock = has_mid_block(state_dict) + logger.info(f"Applying a {get_version(lora_has_midblock)} LoRA ({lora.name}) to a { motion_model.model.mm_info.mm_version} motion model.") + + patches = {} + # convert lora state dict to one that matches motion_module keys and tensors + for key in state_dict: + # if motion_module doesn't have a midblock, skip mid_block entries + if not model_has_midblock: + if "mid_block" in key: continue + # only process lora down key (we will process up at the same time as down) + if "up." in key: continue + + # get up key version of down key + up_key = key.replace(".down.", ".up.") + + # adapt key to match motion_module key format - remove 'processor.', '_lora', 'down.', and 'up.' + model_key = key.replace("processor.", "").replace("_lora", "").replace("down.", "").replace("up.", "") + # motion_module keys have a '0.' after all 'to_out.' 
weight keys + model_key = model_key.replace("to_out.", "to_out.0.") + + weight_down = state_dict[key] + weight_up = state_dict[up_key] + # actual weights obtained by matrix multiplication of up and down weights + # save as a tuple, so that (Motion)ModelPatcher's calculate_weight function detects len==1, applying it correctly + patches[model_key] = (torch.mm(weight_up, weight_down),) + del state_dict + # add patches to motion ModelPatcher + motion_model.add_patches(patches=patches, strength_patch=lora.strength) + + +def load_motion_module_gen1(model_name: str, model: ModelPatcher, motion_lora: MotionLoraList = None, motion_model_settings: AnimateDiffSettings = None) -> MotionModelPatcher: + model_path = get_motion_model_path(model_name) + logger.info(f"Loading motion module {model_name}") + mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True) + # TODO: check for empty state dict? + # get normalized state_dict and motion model info + mm_state_dict, mm_info = normalize_ad_state_dict(mm_state_dict=mm_state_dict, mm_name=model_name) + # check that motion model is compatible with sd model + model_sd_type = get_sd_model_type(model) + if model_sd_type != mm_info.sd_type: + raise MotionCompatibilityError(f"Motion module '{mm_info.mm_name}' is intended for {mm_info.sd_type} models, " \ + + f"but the provided model is type {model_sd_type}.") + # apply motion model settings + mm_state_dict = apply_mm_settings(model_dict=mm_state_dict, mm_settings=motion_model_settings) + # initialize AnimateDiffModelWrapper + ad_wrapper = AnimateDiffModel(mm_state_dict=mm_state_dict, mm_info=mm_info) + ad_wrapper.to(model.model_dtype()) + ad_wrapper.to(model.offload_device) + is_animatelcm = mm_info.mm_format==AnimateDiffFormat.ANIMATELCM + load_result = ad_wrapper.load_state_dict(mm_state_dict, strict=not is_animatelcm) + # TODO: report load_result of motion_module loading? + # wrap motion_module into a ModelPatcher, to allow motion lora patches + motion_model = MotionModelPatcher(model=ad_wrapper, load_device=model.load_device, offload_device=model.offload_device) + # load motion_lora, if present + if motion_lora is not None: + for lora in motion_lora.loras: + load_motion_lora_as_patches(motion_model, lora) + return motion_model + + +def load_motion_module_gen2(model_name: str, motion_model_settings: AnimateDiffSettings = None) -> MotionModelPatcher: + model_path = get_motion_model_path(model_name) + logger.info(f"Loading motion module {model_name} via Gen2") + mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True) + # TODO: check for empty state dict? + # get normalized state_dict and motion model info (converts alternate AD models like HotshotXL into AD keys) + mm_state_dict, mm_info = normalize_ad_state_dict(mm_state_dict=mm_state_dict, mm_name=model_name) + # apply motion model settings + mm_state_dict = apply_mm_settings(model_dict=mm_state_dict, mm_settings=motion_model_settings) + # initialize AnimateDiffModelWrapper + ad_wrapper = AnimateDiffModel(mm_state_dict=mm_state_dict, mm_info=mm_info) + ad_wrapper.to(comfy.model_management.unet_dtype()) + ad_wrapper.to(comfy.model_management.unet_offload_device()) + is_animatelcm = mm_info.mm_format==AnimateDiffFormat.ANIMATELCM + load_result = ad_wrapper.load_state_dict(mm_state_dict, strict=not is_animatelcm) + # TODO: manually check load_results for AnimateLCM models + if is_animatelcm: + pass + # TODO: report load_result of motion_module loading? 
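Stepping back to `load_motion_lora_as_patches` above, the key surgery and the up/down matrix product can be checked in isolation (the rank-8 shapes are hypothetical, not taken from a real LoRA file):

```python
import torch

key = ("down_blocks.0.motion_modules.0.temporal_transformer."
       "transformer_blocks.0.attention_blocks.0.processor.to_q_lora.down.weight")
model_key = key.replace("processor.", "").replace("_lora", "") \
               .replace("down.", "").replace("up.", "")
print(model_key)
# down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight

rank = 8  # hypothetical LoRA rank
weight_up, weight_down = torch.randn(320, rank), torch.randn(rank, 320)
patch = (torch.mm(weight_up, weight_down),)  # len-1 tuple: applied as a plain additive delta
```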
+ # wrap motion_module into a ModelPatcher, to allow motion lora patches + motion_model = MotionModelPatcher(model=ad_wrapper, load_device=comfy.model_management.get_torch_device(), + offload_device=comfy.model_management.unet_offload_device()) + return motion_model + + +def create_fresh_motion_module(motion_model: MotionModelPatcher) -> MotionModelPatcher: + ad_wrapper = AnimateDiffModel(mm_state_dict=motion_model.model.state_dict(), mm_info=motion_model.model.mm_info) + ad_wrapper.to(comfy.model_management.unet_dtype()) + ad_wrapper.to(comfy.model_management.unet_offload_device()) + ad_wrapper.load_state_dict(motion_model.model.state_dict()) + return MotionModelPatcher(model=ad_wrapper, load_device=comfy.model_management.get_torch_device(), + offload_device=comfy.model_management.unet_offload_device()) + + +def validate_model_compatibility_gen2(model: ModelPatcher, motion_model: MotionModelPatcher): + # check that motion model is compatible with sd model + model_sd_type = get_sd_model_type(model) + mm_info = motion_model.model.mm_info + if model_sd_type != mm_info.sd_type: + raise MotionCompatibilityError(f"Motion module '{mm_info.mm_name}' is intended for {mm_info.sd_type} models, " \ + + f"but the provided model is type {model_sd_type}.") + + +def interpolate_pe_to_length(model_dict: dict[str, Tensor], key: str, new_length: int): + pe_shape = model_dict[key].shape + temp_pe = rearrange(model_dict[key], "(t b) f d -> t b f d", t=1) + temp_pe = F.interpolate(temp_pe, size=(new_length, pe_shape[-1]), mode="bilinear") + temp_pe = rearrange(temp_pe, "t b f d -> (t b) f d", t=1) + model_dict[key] = temp_pe + del temp_pe + + +def interpolate_pe_to_length_diffs(model_dict: dict[str, Tensor], key: str, new_length: int): + # TODO: fill out and try out + pe_shape = model_dict[key].shape + temp_pe = rearrange(model_dict[key], "(t b) f d -> t b f d", t=1) + temp_pe = F.interpolate(temp_pe, size=(new_length, pe_shape[-1]), mode="bilinear") + temp_pe = rearrange(temp_pe, "t b f d -> (t b) f d", t=1) + model_dict[key] = temp_pe + del temp_pe + + +def interpolate_pe_to_length_pingpong(model_dict: dict[str, Tensor], key: str, new_length: int): + if model_dict[key].shape[1] < new_length: + temp_pe = model_dict[key] + flipped_temp_pe = torch.flip(temp_pe[:, 1:-1, :], [1]) + use_flipped = True + preview_pe = None + while model_dict[key].shape[1] < new_length: + preview_pe = model_dict[key] + model_dict[key] = torch.cat([model_dict[key], flipped_temp_pe if use_flipped else temp_pe], dim=1) + use_flipped = not use_flipped + del temp_pe + del flipped_temp_pe + del preview_pe + model_dict[key] = model_dict[key][:, :new_length] + + +def freeze_mask_of_pe(model_dict: dict[str, Tensor], key: str): + pe_portion = model_dict[key].shape[2] // 64 + first_pe = model_dict[key][:,:1,:] + model_dict[key][:,:,pe_portion:] = first_pe[:,:,pe_portion:] + del first_pe + + +def freeze_mask_of_attn(model_dict: dict[str, Tensor], key: str): + attn_portion = model_dict[key].shape[0] // 2 + model_dict[key][:attn_portion,:attn_portion] *= 1.5 + + +def apply_mm_settings(model_dict: dict[str, Tensor], mm_settings: AnimateDiffSettings) -> dict[str, Tensor]: + if mm_settings is None: + return model_dict + if not mm_settings.has_anything_to_apply(): + return model_dict + # first, handle PE Adjustments + for adjust in mm_settings.adjust_pe.adjusts: + if adjust.has_anything_to_apply(): + already_printed = False + for key in model_dict: + if "attention_blocks" in key and "pos_encoder" in key: + # apply simple motion pe stretch, if needed + 
if adjust.has_motion_pe_stretch():
+                        original_length = model_dict[key].shape[1]
+                        new_pe_length = original_length + adjust.motion_pe_stretch
+                        interpolate_pe_to_length(model_dict, key, new_length=new_pe_length)
+                        if adjust.print_adjustment and not already_printed:
+                            logger.info(f"[Adjust PE]: PE Stretch from {original_length} to {new_pe_length}.")
+                    # apply pe_idx_offset, if needed
+                    if adjust.has_initial_pe_idx_offset():
+                        original_length = model_dict[key].shape[1]
+                        model_dict[key] = model_dict[key][:, adjust.initial_pe_idx_offset:]
+                        if adjust.print_adjustment and not already_printed:
+                            logger.info(f"[Adjust PE]: Offsetting PEs by {adjust.initial_pe_idx_offset}; PE length shortens from {original_length} to {model_dict[key].shape[1]}.")
+                    # apply has_cap_initial_pe_length, if needed
+                    if adjust.has_cap_initial_pe_length():
+                        original_length = model_dict[key].shape[1]
+                        model_dict[key] = model_dict[key][:, :adjust.cap_initial_pe_length]
+                        if adjust.print_adjustment and not already_printed:
+                            logger.info(f"[Adjust PE]: Capping PEs (initial) from {original_length} to {model_dict[key].shape[1]}.")
+                    # apply interpolate_pe_to_length, if needed
+                    if adjust.has_interpolate_pe_to_length():
+                        original_length = model_dict[key].shape[1]
+                        interpolate_pe_to_length(model_dict, key, new_length=adjust.interpolate_pe_to_length)
+                        if adjust.print_adjustment and not already_printed:
+                            logger.info(f"[Adjust PE]: Interpolating PE length from {original_length} to {model_dict[key].shape[1]}.")
+                    # apply final_pe_idx_offset, if needed
+                    if adjust.has_final_pe_idx_offset():
+                        original_length = model_dict[key].shape[1]
+                        model_dict[key] = model_dict[key][:, adjust.final_pe_idx_offset:]
+                        if adjust.print_adjustment and not already_printed:
+                            logger.info(f"[Adjust PE]: Offsetting PEs (final) by {adjust.final_pe_idx_offset}; PE length shortens from {original_length} to {model_dict[key].shape[1]}.")
+                    already_printed = True
+    # finally, apply any weight changes
+    for key in model_dict:
+        if "attention_blocks" in key:
+            if "pos_encoder" in key and mm_settings.adjust_pe.has_anything_to_apply():
+                # apply pe_strength, if needed
+                if mm_settings.has_pe_strength():
+                    model_dict[key] *= mm_settings.pe_strength
+            else:
+                # apply attn_strength, if needed
+                if mm_settings.has_attn_strength():
+                    model_dict[key] *= mm_settings.attn_strength
+                # apply specific attn_strengths, if needed
+                if mm_settings.has_any_attn_sub_strength():
+                    if "to_q" in key and mm_settings.has_attn_q_strength():
+                        model_dict[key] *= mm_settings.attn_q_strength
+                    elif "to_k" in key and mm_settings.has_attn_k_strength():
+                        model_dict[key] *= mm_settings.attn_k_strength
+                    elif "to_v" in key and mm_settings.has_attn_v_strength():
+                        model_dict[key] *= mm_settings.attn_v_strength
+                    elif "to_out" in key:
+                        if key.strip().endswith("weight") and mm_settings.has_attn_out_weight_strength():
+                            model_dict[key] *= mm_settings.attn_out_weight_strength
+                        elif key.strip().endswith("bias") and mm_settings.has_attn_out_bias_strength():
+                            model_dict[key] *= mm_settings.attn_out_bias_strength
+        # apply other strength, if needed
+        elif mm_settings.has_other_strength():
+            model_dict[key] *= mm_settings.other_strength
+    return model_dict
+
+
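A rough end-to-end sketch of applying these settings to a freshly loaded motion model state dict (the filename is only an example, and `AdjustPE`/`AdjustPEGroup` are assumed to be imported from `ad_settings`):

```python
from .ad_settings import AdjustPE, AdjustPEGroup  # assumed import for this sketch

settings = AnimateDiffSettings(
    adjust_pe=AdjustPEGroup(AdjustPE(motion_pe_stretch=8, print_adjustment=True)),
    attn_strength=0.9,
)
sd = comfy.utils.load_torch_file(get_motion_model_path("mm_sd_v15_v2.ckpt"), safe_load=True)
sd, info = normalize_ad_state_dict(sd, "mm_sd_v15_v2.ckpt")
sd = apply_mm_settings(model_dict=sd, mm_settings=settings)
# each pos_encoder.pe grows 8 positions via interpolation, and every other
# attention_blocks weight is scaled by 0.9
```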
+class InjectionParams:
+    def __init__(self, unlimited_area_hack: bool=False, apply_mm_groupnorm_hack: bool=True, model_name: str="",
+                 apply_v2_properly: bool=True) -> None:
+        self.full_length = None
+        self.unlimited_area_hack = unlimited_area_hack
+        self.apply_mm_groupnorm_hack = apply_mm_groupnorm_hack
+        self.model_name = model_name
+        self.apply_v2_properly = apply_v2_properly
+        self.context_options: ContextOptionsGroup = ContextOptionsGroup.default()
+        self.motion_model_settings = AnimateDiffSettings() # Gen1
+        self.sub_idxs = None # value should NOT be included in clone, so it will auto reset
+
+    def set_noise_extra_args(self, noise_extra_args: dict):
+        noise_extra_args["context_options"] = self.context_options.clone()
+
+    def set_context(self, context_options: ContextOptionsGroup):
+        self.context_options = context_options.clone() if context_options else ContextOptionsGroup.default()
+
+    def is_using_sliding_context(self) -> bool:
+        return self.context_options.context_length is not None
+
+    def set_motion_model_settings(self, motion_model_settings: AnimateDiffSettings): # Gen1
+        if motion_model_settings is None:
+            self.motion_model_settings = AnimateDiffSettings()
+        else:
+            self.motion_model_settings = motion_model_settings
+
+    def reset_context(self):
+        self.context_options = ContextOptionsGroup.default()
+
+    def clone(self) -> 'InjectionParams':
+        new_params = InjectionParams(
+            self.unlimited_area_hack, self.apply_mm_groupnorm_hack,
+            self.model_name, apply_v2_properly=self.apply_v2_properly,
+        )
+        new_params.full_length = self.full_length
+        new_params.set_context(self.context_options)
+        new_params.set_motion_model_settings(self.motion_model_settings) # Gen1
+        return new_params
diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_lora.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_lora.py
new file mode 100644
index 0000000000000000000000000000000000000000..d96259dd053f1e995ad955e22e8ee18597001daf
--- /dev/null
+++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_lora.py
@@ -0,0 +1,25 @@
+class MotionLoraInfo:
+    def __init__(self, name: str, strength: float = 1.0, hash: str=""):
+        self.name = name
+        self.strength = strength
+        self.hash = hash
+
+    def set_hash(self, hash: str):
+        self.hash = hash
+
+    def clone(self):
+        return MotionLoraInfo(self.name, self.strength, self.hash)
+
+
+class MotionLoraList:
+    def __init__(self):
+        self.loras: list[MotionLoraInfo] = []
+
+    def add_lora(self, lora: MotionLoraInfo):
+        self.loras.append(lora)
+
+    def clone(self):
+        new_list = MotionLoraList()
+        for lora in self.loras:
+            new_list.add_lora(lora.clone())
+        return new_list
diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_module_ad.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_module_ad.py
new file mode 100644
index 0000000000000000000000000000000000000000..c1b489e1268b715e976cf62b8c14816b3cb6ad5e
--- /dev/null
+++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/motion_module_ad.py
@@ -0,0 +1,971 @@
+import math
+from typing import Iterable, Tuple, Union
+import re
+
+import torch
+from einops import rearrange, repeat
+from torch import Tensor, nn
+
+from comfy.ldm.modules.attention import FeedForward, SpatialTransformer
+from comfy.model_patcher import ModelPatcher
+from comfy.ldm.modules.diffusionmodules import openaimodel
+from comfy.ldm.modules.diffusionmodules.openaimodel import SpatialTransformer
+from comfy.controlnet import broadcast_image_to
+from comfy.utils import repeat_to_batch_size
+import comfy.ops
+import comfy.model_management
+
+from .context import ContextFuseMethod, ContextOptions, get_context_weights, get_context_windows
+from .utils_motion import CrossAttentionMM, MotionCompatibilityError, extend_to_batch_size, prepare_mask_batch
+from .utils_model import BetaSchedules, ModelTypeSD
+from .logger import logger
+
+
+def zero_module(module):
+    # Zero out the 
parameters of a module and return it. + for p in module.parameters(): + p.detach().zero_() + return module + + +class AnimateDiffFormat: + ANIMATEDIFF = "AnimateDiff" + HOTSHOTXL = "HotshotXL" + ANIMATELCM = "AnimateLCM" + + +class AnimateDiffVersion: + V1 = "v1" + V2 = "v2" + V3 = "v3" + + +class AnimateDiffInfo: + def __init__(self, sd_type: str, mm_format: str, mm_version: str, mm_name: str): + self.sd_type = sd_type + self.mm_format = mm_format + self.mm_version = mm_version + self.mm_name = mm_name + + def get_string(self): + return f"{self.mm_name}:{self.mm_version}:{self.mm_format}:{self.sd_type}" + + +def is_hotshotxl(mm_state_dict: dict[str, Tensor]) -> bool: + # use pos_encoder naming to determine if hotshotxl model + for key in mm_state_dict.keys(): + if key.endswith("pos_encoder.positional_encoding"): + return True + return False + + +def is_animatelcm(mm_state_dict: dict[str, Tensor]) -> bool: + # use lack of ANY pos_encoder keys to determine if animatelcm model + for key in mm_state_dict.keys(): + if "pos_encoder" in key: + return False + return True + + +def get_down_block_max(mm_state_dict: dict[str, Tensor]) -> int: + # keep track of biggest down_block count in module + biggest_block = 0 + for key in mm_state_dict.keys(): + if "down_blocks" in key: + try: + block_int = key.split(".")[1] + block_num = int(block_int) + if block_num > biggest_block: + biggest_block = block_num + except ValueError: + pass + return biggest_block + + +def has_mid_block(mm_state_dict: dict[str, Tensor]): + # check if keys contain mid_block + for key in mm_state_dict.keys(): + if key.startswith("mid_block."): + return True + return False + + +def get_position_encoding_max_len(mm_state_dict: dict[str, Tensor], mm_name: str, mm_format: str) -> Union[int, None]: + # use pos_encoder.pe entries to determine max length - [1, {max_length}, {320|640|1280}] + for key in mm_state_dict.keys(): + if key.endswith("pos_encoder.pe"): + return mm_state_dict[key].size(1) # get middle dim + # AnimateLCM models should have no pos_encoder entries, and assumed to be 64 + if mm_format == AnimateDiffFormat.ANIMATELCM: + return 64 + raise MotionCompatibilityError(f"No pos_encoder.pe found in mm_state_dict - {mm_name} is not a valid AnimateDiff motion module!") + + +_regex_hotshotxl_module_num = re.compile(r'temporal_attentions\.(\d+)\.') +def find_hotshot_module_num(key: str) -> Union[int, None]: + found = _regex_hotshotxl_module_num.search(key) + if found: + return int(found.group(1)) + return None + + +def normalize_ad_state_dict(mm_state_dict: dict[str, Tensor], mm_name: str) -> Tuple[dict[str, Tensor], AnimateDiffInfo]: + # from pathlib import Path + # with open(Path(__file__).parent.parent.parent / f"keys_{mm_name}.txt", "w") as afile: + # for key, value in mm_state_dict.items(): + # afile.write(f"{key}:\t{value.shape}\n") + + # remove all non-temporal keys (in case model has extra stuff in it) + for key in list(mm_state_dict.keys()): + if "temporal" not in key: + del mm_state_dict[key] + # determine what SD model the motion module is intended for + sd_type: str = None + down_block_max = get_down_block_max(mm_state_dict) + if down_block_max == 3: + sd_type = ModelTypeSD.SD1_5 + elif down_block_max == 2: + sd_type = ModelTypeSD.SDXL + else: + raise ValueError(f"'{mm_name}' is not a valid SD1.5 nor SDXL motion module - contained {down_block_max} downblocks.") + # determine the model's format + mm_format = AnimateDiffFormat.ANIMATEDIFF + if is_hotshotxl(mm_state_dict): + mm_format = AnimateDiffFormat.HOTSHOTXL + if 
is_animatelcm(mm_state_dict): + mm_format = AnimateDiffFormat.ANIMATELCM + # determine the model's version + mm_version = AnimateDiffVersion.V1 + if has_mid_block(mm_state_dict): + mm_version = AnimateDiffVersion.V2 + elif sd_type==ModelTypeSD.SD1_5 and get_position_encoding_max_len(mm_state_dict, mm_name, mm_format)==32: + mm_version = AnimateDiffVersion.V3 + info = AnimateDiffInfo(sd_type=sd_type, mm_format=mm_format, mm_version=mm_version, mm_name=mm_name) + # convert to AnimateDiff format, if needed + if mm_format == AnimateDiffFormat.HOTSHOTXL: + # HotshotXL is AD-based architecture applied to SDXL instead of SD1.5 + # By renaming the keys, no code needs to be adapted at all + # + # reformat temporal_attentions: + # HSXL: temporal_attentions.#. + # AD: motion_modules.#.temporal_transformer. + # HSXL: pos_encoder.positional_encoding + # AD: pos_encoder.pe + for key in list(mm_state_dict.keys()): + module_num = find_hotshot_module_num(key) + if module_num is not None: + new_key = key.replace(f"temporal_attentions.{module_num}", + f"motion_modules.{module_num}.temporal_transformer", 1) + new_key = new_key.replace("pos_encoder.positional_encoding", "pos_encoder.pe") + mm_state_dict[new_key] = mm_state_dict[key] + del mm_state_dict[key] + # return adjusted mm_state_dict and info + return mm_state_dict, info + + +class BlockType: + UP = "up" + DOWN = "down" + MID = "mid" + + +class AnimateDiffModel(nn.Module): + def __init__(self, mm_state_dict: dict[str, Tensor], mm_info: AnimateDiffInfo): + super().__init__() + self.mm_info = mm_info + self.down_blocks: Iterable[MotionModule] = nn.ModuleList([]) + self.up_blocks: Iterable[MotionModule] = nn.ModuleList([]) + self.mid_block: Union[MotionModule, None] = None + self.encoding_max_len = get_position_encoding_max_len(mm_state_dict, mm_info.mm_name, mm_info.mm_format) + self.has_position_encoding = self.encoding_max_len is not None + # determine ops to use (to support fp8 properly) + if comfy.model_management.unet_manual_cast(comfy.model_management.unet_dtype(), comfy.model_management.get_torch_device()) is None: + ops = comfy.ops.disable_weight_init + else: + ops = comfy.ops.manual_cast + # SDXL has 3 up/down blocks, SD1.5 has 4 up/down blocks + if mm_info.sd_type == ModelTypeSD.SDXL: + layer_channels = (320, 640, 1280) + else: + layer_channels = (320, 640, 1280, 1280) + # fill out down/up blocks and middle block, if present + for c in layer_channels: + self.down_blocks.append(MotionModule(c, temporal_position_encoding=self.has_position_encoding, + temporal_position_encoding_max_len=self.encoding_max_len, block_type=BlockType.DOWN, ops=ops)) + for c in reversed(layer_channels): + self.up_blocks.append(MotionModule(c, temporal_position_encoding=self.has_position_encoding, + temporal_position_encoding_max_len=self.encoding_max_len, block_type=BlockType.UP, ops=ops)) + if has_mid_block(mm_state_dict): + self.mid_block = MotionModule(1280, temporal_position_encoding=self.has_position_encoding, + temporal_position_encoding_max_len=self.encoding_max_len, block_type=BlockType.MID, ops=ops) + self.AD_video_length: int = 24 + + def get_device_debug(self): + return self.down_blocks[0].motion_modules[0].temporal_transformer.proj_in.weight.device + + def is_length_valid_for_encoding_max_len(self, length: int): + if self.encoding_max_len is None: + return True + return length <= self.encoding_max_len + + def get_best_beta_schedule(self, log=False) -> str: + to_return = None + if self.mm_info.sd_type == ModelTypeSD.SD1_5: + if self.mm_info.mm_format == 
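The detection heuristics above can be exercised directly; the exact strings depend on `ModelTypeSD`, which lives in `utils_model`, so the output shown is only indicative:

```python
import comfy.utils

sd = comfy.utils.load_torch_file("/path/to/mm_sd_v15_v2.ckpt", safe_load=True)  # illustrative path
sd, info = normalize_ad_state_dict(sd, "mm_sd_v15_v2.ckpt")
print(info.get_string())
# -> something like "mm_sd_v15_v2.ckpt:v2:AnimateDiff:<sd type>"
#    v2 because a mid_block is present; AnimateDiff because no HotshotXL/AnimateLCM markers
```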
AnimateDiffFormat.ANIMATELCM: + to_return = BetaSchedules.LCM # while LCM_100 is the intended schedule, I find LCM to have much less flicker + else: + to_return = BetaSchedules.SQRT_LINEAR + elif self.mm_info.sd_type == ModelTypeSD.SDXL: + if self.mm_info.mm_format == AnimateDiffFormat.HOTSHOTXL: + to_return = BetaSchedules.LINEAR + else: + to_return = BetaSchedules.LINEAR_ADXL + if to_return is not None: + if log: logger.info(f"[Autoselect]: '{to_return}' beta_schedule for {self.mm_info.get_string()}") + else: + to_return = BetaSchedules.USE_EXISTING + if log: logger.info(f"[Autoselect]: could not find beta_schedule for {self.mm_info.get_string()}, defaulting to '{to_return}'") + return to_return + + def cleanup(self): + pass + + def inject(self, model: ModelPatcher): + unet: openaimodel.UNetModel = model.model.diffusion_model + # inject input (down) blocks + # SD15 mm contains 4 downblocks, each with 2 TemporalTransformers - 8 in total + # SDXL mm contains 3 downblocks, each with 2 TemporalTransformers - 6 in total + self._inject(unet.input_blocks, self.down_blocks) + # inject output (up) blocks + # SD15 mm contains 4 upblocks, each with 3 TemporalTransformers - 12 in total + # SDXL mm contains 3 upblocks, each with 3 TemporalTransformers - 9 in total + self._inject(unet.output_blocks, self.up_blocks) + # inject mid block, if needed (encapsulate in list to make structure compatible) + if self.mid_block is not None: + self._inject([unet.middle_block], [self.mid_block]) + del unet + + def _inject(self, unet_blocks: nn.ModuleList, mm_blocks: nn.ModuleList): + # Rules for injection: + # For each component list in a unet block: + # if SpatialTransformer exists in list, place next block after last occurrence + # elif ResBlock exists in list, place next block after first occurrence + # else don't place block + injection_count = 0 + unet_idx = 0 + # details about blocks passed in + per_block = len(mm_blocks[0].motion_modules) + injection_goal = len(mm_blocks) * per_block + # only stop injecting when modules exhausted + while injection_count < injection_goal: + # figure out which VanillaTemporalModule from mm to inject + mm_blk_idx, mm_vtm_idx = injection_count // per_block, injection_count % per_block + # figure out layout of unet block components + st_idx = -1 # SpatialTransformer index + res_idx = -1 # first ResBlock index + # first, figure out indeces of relevant blocks + for idx, component in enumerate(unet_blocks[unet_idx]): + if type(component) == SpatialTransformer: + st_idx = idx + elif type(component).__name__ == "ResBlock" and res_idx < 0: + res_idx = idx + # if SpatialTransformer exists, inject right after + if st_idx >= 0: + #logger.info(f"AD: injecting after ST({st_idx})") + unet_blocks[unet_idx].insert(st_idx+1, mm_blocks[mm_blk_idx].motion_modules[mm_vtm_idx]) + injection_count += 1 + # otherwise, if only ResBlock exists, inject right after + elif res_idx >= 0: + #logger.info(f"AD: injecting after Res({res_idx})") + unet_blocks[unet_idx].insert(res_idx+1, mm_blocks[mm_blk_idx].motion_modules[mm_vtm_idx]) + injection_count += 1 + # increment unet_idx + unet_idx += 1 + + def eject(self, model: ModelPatcher): + unet: openaimodel.UNetModel = model.model.diffusion_model + # remove from input blocks (downblocks) + self._eject(unet.input_blocks) + # remove from output blocks (upblocks) + self._eject(unet.output_blocks) + # remove from middle block (encapsulate in list to make compatible) + self._eject([unet.middle_block]) + del unet + + def _eject(self, unet_blocks: nn.ModuleList): + # 
eject all VanillaTemporalModule objects from all blocks + for block in unet_blocks: + idx_to_pop = [] + for idx, component in enumerate(block): + if type(component) == VanillaTemporalModule: + idx_to_pop.append(idx) + # pop in backwards order, as to not disturb what the indeces refer to + for idx in sorted(idx_to_pop, reverse=True): + block.pop(idx) + + def set_video_length(self, video_length: int, full_length: int): + self.AD_video_length = video_length + for block in self.down_blocks: + block.set_video_length(video_length, full_length) + for block in self.up_blocks: + block.set_video_length(video_length, full_length) + if self.mid_block is not None: + self.mid_block.set_video_length(video_length, full_length) + + def set_scale(self, multival: Union[float, Tensor]): + if multival is None: + multival = 1.0 + if type(multival) == Tensor: + self._set_scale_multiplier(1.0) + self._set_scale_mask(multival) + else: + self._set_scale_multiplier(multival) + self._set_scale_mask(None) + + def set_effect(self, multival: Union[float, Tensor]): + for block in self.down_blocks: + block.set_effect(multival) + for block in self.up_blocks: + block.set_effect(multival) + if self.mid_block is not None: + self.mid_block.set_effect(multival) + + def set_sub_idxs(self, sub_idxs: list[int]): + for block in self.down_blocks: + block.set_sub_idxs(sub_idxs) + for block in self.up_blocks: + block.set_sub_idxs(sub_idxs) + if self.mid_block is not None: + self.mid_block.set_sub_idxs(sub_idxs) + + def set_view_options(self, view_options: ContextOptions): + for block in self.down_blocks: + block.set_view_options(view_options) + for block in self.up_blocks: + block.set_view_options(view_options) + if self.mid_block is not None: + self.mid_block.set_view_options(view_options) + + def reset(self): + self._reset_sub_idxs() + self._reset_scale_multiplier() + self._reset_temp_vars() + + def _set_scale_multiplier(self, multiplier: Union[float, None]): + for block in self.down_blocks: + block.set_scale_multiplier(multiplier) + for block in self.up_blocks: + block.set_scale_multiplier(multiplier) + if self.mid_block is not None: + self.mid_block.set_scale_multiplier(multiplier) + + def _set_scale_mask(self, mask: Tensor): + for block in self.down_blocks: + block.set_scale_mask(mask) + for block in self.up_blocks: + block.set_scale_mask(mask) + if self.mid_block is not None: + self.mid_block.set_scale_mask(mask) + + def _reset_temp_vars(self): + for block in self.down_blocks: + block.reset_temp_vars() + for block in self.up_blocks: + block.reset_temp_vars() + if self.mid_block is not None: + self.mid_block.reset_temp_vars() + + def _reset_scale_multiplier(self): + self._set_scale_multiplier(None) + + def _reset_sub_idxs(self): + self.set_sub_idxs(None) + + +class MotionModule(nn.Module): + def __init__(self, + in_channels, + temporal_position_encoding=True, + temporal_position_encoding_max_len=24, + block_type: str=BlockType.DOWN, + ops=comfy.ops.disable_weight_init + ): + super().__init__() + if block_type == BlockType.MID: + # mid blocks contain only a single VanillaTemporalModule + self.motion_modules: Iterable[VanillaTemporalModule] = nn.ModuleList([get_motion_module(in_channels, temporal_position_encoding, temporal_position_encoding_max_len, ops=ops)]) + else: + # down blocks contain two VanillaTemporalModules + self.motion_modules: Iterable[VanillaTemporalModule] = nn.ModuleList( + [ + get_motion_module(in_channels, temporal_position_encoding, temporal_position_encoding_max_len, ops=ops), + get_motion_module(in_channels, 
temporal_position_encoding, temporal_position_encoding_max_len, ops=ops) + ] + ) + # up blocks contain one additional VanillaTemporalModule + if block_type == BlockType.UP: + self.motion_modules.append(get_motion_module(in_channels, temporal_position_encoding, temporal_position_encoding_max_len, ops=ops)) + + def set_video_length(self, video_length: int, full_length: int): + for motion_module in self.motion_modules: + motion_module.set_video_length(video_length, full_length) + + def set_scale_multiplier(self, multiplier: Union[float, None]): + for motion_module in self.motion_modules: + motion_module.set_scale_multiplier(multiplier) + + def set_scale_mask(self, mask: Tensor): + for motion_module in self.motion_modules: + motion_module.set_scale_mask(mask) + + def set_effect(self, multival: Union[float, Tensor]): + for motion_module in self.motion_modules: + motion_module.set_effect(multival) + + def set_sub_idxs(self, sub_idxs: list[int]): + for motion_module in self.motion_modules: + motion_module.set_sub_idxs(sub_idxs) + + def set_view_options(self, view_options: ContextOptions): + for motion_module in self.motion_modules: + motion_module.set_view_options(view_options=view_options) + + def reset_temp_vars(self): + for motion_module in self.motion_modules: + motion_module.reset_temp_vars() + + +def get_motion_module(in_channels, temporal_position_encoding, temporal_position_encoding_max_len, ops=comfy.ops.disable_weight_init): + return VanillaTemporalModule(in_channels=in_channels, temporal_position_encoding=temporal_position_encoding, temporal_position_encoding_max_len=temporal_position_encoding_max_len, ops=ops) + + +class VanillaTemporalModule(nn.Module): + def __init__( + self, + in_channels, + num_attention_heads=8, + num_transformer_block=1, + attention_block_types=("Temporal_Self", "Temporal_Self"), + cross_frame_attention_mode=None, + temporal_position_encoding=True, + temporal_position_encoding_max_len=24, + temporal_attention_dim_div=1, + zero_initialize=True, + ops=comfy.ops.disable_weight_init, + ): + super().__init__() + + self.video_length = 16 + self.full_length = 16 + self.sub_idxs = None + self.view_options = None + + self.effect = None + self.temp_effect_mask: Tensor = None + self.prev_input_tensor_batch = 0 + + self.temporal_transformer = TemporalTransformer3DModel( + in_channels=in_channels, + num_attention_heads=num_attention_heads, + attention_head_dim=in_channels + // num_attention_heads + // temporal_attention_dim_div, + num_layers=num_transformer_block, + attention_block_types=attention_block_types, + cross_frame_attention_mode=cross_frame_attention_mode, + temporal_position_encoding=temporal_position_encoding, + temporal_position_encoding_max_len=temporal_position_encoding_max_len, + ops=ops + ) + + if zero_initialize: + self.temporal_transformer.proj_out = zero_module( + self.temporal_transformer.proj_out + ) + + def set_video_length(self, video_length: int, full_length: int): + self.video_length = video_length + self.full_length = full_length + self.temporal_transformer.set_video_length(video_length, full_length) + + def set_scale_multiplier(self, multiplier: Union[float, None]): + self.temporal_transformer.set_scale_multiplier(multiplier) + + def set_scale_mask(self, mask: Tensor): + self.temporal_transformer.set_scale_mask(mask) + + def set_effect(self, multival: Union[float, Tensor]): + if type(multival) == Tensor: + self.effect = multival + elif multival is not None and math.isclose(multival, 1.0): + self.effect = None + else: + self.effect = multival + 
self.temp_effect_mask = None + + def set_sub_idxs(self, sub_idxs: list[int]): + self.sub_idxs = sub_idxs + self.temporal_transformer.set_sub_idxs(sub_idxs) + + def set_view_options(self, view_options: ContextOptions): + self.view_options = view_options + + def reset_temp_vars(self): + self.set_effect(None) + self.set_view_options(None) + self.temporal_transformer.reset_temp_vars() + + def get_effect_mask(self, input_tensor: Tensor): + batch, channel, height, width = input_tensor.shape + batched_number = batch // self.video_length + full_batched_idxs = list(range(self.video_length))*batched_number + # if there is a cached temp_effect_mask and it is valid for current input, return it + if batch == self.prev_input_tensor_batch and self.temp_effect_mask is not None: + if self.sub_idxs is not None: + return self.temp_effect_mask[self.sub_idxs*batched_number] + return self.temp_effect_mask[full_batched_idxs] + # clear any existing mask + del self.temp_effect_mask + self.temp_effect_mask = None + # recalculate temp mask + self.prev_input_tensor_batch = batch + # make sure mask matches expected dimensions + mask = prepare_mask_batch(self.effect, shape=(self.full_length, 1, height, width)) + # make sure mask is as long as full_length - clone last element of list if too short + self.temp_effect_mask = extend_to_batch_size(mask, self.full_length).to( + dtype=input_tensor.dtype, device=input_tensor.device) + # return finalized mask + if self.sub_idxs is not None: + return self.temp_effect_mask[self.sub_idxs*batched_number] + return self.temp_effect_mask[full_batched_idxs] + + def forward(self, input_tensor: Tensor, encoder_hidden_states=None, attention_mask=None): + if self.effect is None: + return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask, self.view_options) + # return weighted average of input_tensor and AD output + if type(self.effect) != Tensor: + effect = self.effect + # do nothing if effect is 0 + if math.isclose(effect, 0.0): + return input_tensor + else: + effect = self.get_effect_mask(input_tensor) + return input_tensor*(1.0-effect) + self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask, self.view_options)*effect + + +class TemporalTransformer3DModel(nn.Module): + def __init__( + self, + in_channels, + num_attention_heads, + attention_head_dim, + num_layers, + attention_block_types=( + "Temporal_Self", + "Temporal_Self", + ), + dropout=0.0, + norm_num_groups=32, + cross_attention_dim=768, + activation_fn="geglu", + attention_bias=False, + upcast_attention=False, + cross_frame_attention_mode=None, + temporal_position_encoding=False, + temporal_position_encoding_max_len=24, + ops=comfy.ops.disable_weight_init, + ): + super().__init__() + self.video_length = 16 + self.full_length = 16 + self.raw_scale_mask: Union[Tensor, None] = None + self.temp_scale_mask: Union[Tensor, None] = None + self.sub_idxs: Union[list[int], None] = None + self.prev_hidden_states_batch = 0 + + + inner_dim = num_attention_heads * attention_head_dim + + self.norm = ops.GroupNorm( + num_groups=norm_num_groups, num_channels=in_channels, eps=1e-6, affine=True + ) + self.proj_in = ops.Linear(in_channels, inner_dim) + + self.transformer_blocks: Iterable[TemporalTransformerBlock] = nn.ModuleList( + [ + TemporalTransformerBlock( + dim=inner_dim, + num_attention_heads=num_attention_heads, + attention_head_dim=attention_head_dim, + attention_block_types=attention_block_types, + dropout=dropout, + norm_num_groups=norm_num_groups, + cross_attention_dim=cross_attention_dim, 
+ activation_fn=activation_fn, + attention_bias=attention_bias, + upcast_attention=upcast_attention, + cross_frame_attention_mode=cross_frame_attention_mode, + temporal_position_encoding=temporal_position_encoding, + temporal_position_encoding_max_len=temporal_position_encoding_max_len, + ops=ops, + ) + for d in range(num_layers) + ] + ) + self.proj_out = ops.Linear(inner_dim, in_channels) + + def set_video_length(self, video_length: int, full_length: int): + self.video_length = video_length + self.full_length = full_length + + def set_scale_multiplier(self, multiplier: Union[float, None]): + for block in self.transformer_blocks: + block.set_scale_multiplier(multiplier) + + def set_scale_mask(self, mask: Tensor): + self.raw_scale_mask = mask + self.temp_scale_mask = None + + def set_sub_idxs(self, sub_idxs: list[int]): + self.sub_idxs = sub_idxs + for block in self.transformer_blocks: + block.set_sub_idxs(sub_idxs) + + def reset_temp_vars(self): + del self.temp_scale_mask + self.temp_scale_mask = None + self.prev_hidden_states_batch = 0 + + def get_scale_mask(self, hidden_states: Tensor) -> Union[Tensor, None]: + # if no raw mask, return None + if self.raw_scale_mask is None: + return None + shape = hidden_states.shape + batch, channel, height, width = shape + # if temp mask already calculated, return it + if self.temp_scale_mask != None: + # check if hidden_states batch matches + if batch == self.prev_hidden_states_batch: + if self.sub_idxs is not None: + return self.temp_scale_mask[:, self.sub_idxs, :] + return self.temp_scale_mask + # if does not match, reset cached temp_scale_mask and recalculate it + del self.temp_scale_mask + self.temp_scale_mask = None + # otherwise, calculate temp mask + self.prev_hidden_states_batch = batch + mask = prepare_mask_batch(self.raw_scale_mask, shape=(self.full_length, 1, height, width)) + mask = repeat_to_batch_size(mask, self.full_length) + # if mask not the same amount length as full length, make it match + if self.full_length != mask.shape[0]: + mask = broadcast_image_to(mask, self.full_length, 1) + # reshape mask to attention K shape (h*w, latent_count, 1) + batch, channel, height, width = mask.shape + # first, perform same operations as on hidden_states, + # turning (b, c, h, w) -> (b, h*w, c) + mask = mask.permute(0, 2, 3, 1).reshape(batch, height*width, channel) + # then, make it the same shape as attention's k, (h*w, b, c) + mask = mask.permute(1, 0, 2) + # make masks match the expected length of h*w + batched_number = shape[0] // self.video_length + if batched_number > 1: + mask = torch.cat([mask] * batched_number, dim=0) + # cache mask and set to proper device + self.temp_scale_mask = mask + # move temp_scale_mask to proper dtype + device + self.temp_scale_mask = self.temp_scale_mask.to(dtype=hidden_states.dtype, device=hidden_states.device) + # return subset of masks, if needed + if self.sub_idxs is not None: + return self.temp_scale_mask[:, self.sub_idxs, :] + return self.temp_scale_mask + + def forward(self, hidden_states, encoder_hidden_states=None, attention_mask=None, view_options: ContextOptions=None): + batch, channel, height, width = hidden_states.shape + residual = hidden_states + scale_mask = self.get_scale_mask(hidden_states) + # add some casts for fp8 purposes - does not affect speed otherwise + hidden_states = self.norm(hidden_states).to(hidden_states.dtype) + inner_dim = hidden_states.shape[1] + hidden_states = hidden_states.permute(0, 2, 3, 1).reshape( + batch, height * width, inner_dim + ) + hidden_states = 
self.proj_in(hidden_states).to(hidden_states.dtype) + + # Transformer Blocks + for block in self.transformer_blocks: + hidden_states = block( + hidden_states, + encoder_hidden_states=encoder_hidden_states, + attention_mask=attention_mask, + video_length=self.video_length, + scale_mask=scale_mask, + view_options=view_options + ) + + # output + hidden_states = self.proj_out(hidden_states) + hidden_states = ( + hidden_states.reshape(batch, height, width, inner_dim) + .permute(0, 3, 1, 2) + .contiguous() + ) + + output = hidden_states + residual + + return output + + +class TemporalTransformerBlock(nn.Module): + def __init__( + self, + dim, + num_attention_heads, + attention_head_dim, + attention_block_types=( + "Temporal_Self", + "Temporal_Self", + ), + dropout=0.0, + norm_num_groups=32, + cross_attention_dim=768, + activation_fn="geglu", + attention_bias=False, + upcast_attention=False, + cross_frame_attention_mode=None, + temporal_position_encoding=False, + temporal_position_encoding_max_len=24, + ops=comfy.ops.disable_weight_init, + ): + super().__init__() + + attention_blocks = [] + norms = [] + + for block_name in attention_block_types: + attention_blocks.append( + VersatileAttention( + attention_mode=block_name.split("_")[0], + context_dim=cross_attention_dim # called context_dim for ComfyUI impl + if block_name.endswith("_Cross") + else None, + query_dim=dim, + heads=num_attention_heads, + dim_head=attention_head_dim, + dropout=dropout, + #bias=attention_bias, # remove for Comfy CrossAttention + #upcast_attention=upcast_attention, # remove for Comfy CrossAttention + cross_frame_attention_mode=cross_frame_attention_mode, + temporal_position_encoding=temporal_position_encoding, + temporal_position_encoding_max_len=temporal_position_encoding_max_len, + ops=ops, + ) + ) + norms.append(ops.LayerNorm(dim)) + + self.attention_blocks: Iterable[VersatileAttention] = nn.ModuleList(attention_blocks) + self.norms = nn.ModuleList(norms) + + self.ff = FeedForward(dim, dropout=dropout, glu=(activation_fn == "geglu"), operations=ops) + self.ff_norm = ops.LayerNorm(dim) + + def set_scale_multiplier(self, multiplier: Union[float, None]): + for block in self.attention_blocks: + block.set_scale_multiplier(multiplier) + + def set_sub_idxs(self, sub_idxs: list[int]): + for block in self.attention_blocks: + block.set_sub_idxs(sub_idxs) + + def forward( + self, + hidden_states: Tensor, + encoder_hidden_states: Tensor=None, + attention_mask: Tensor=None, + video_length: int=None, + scale_mask: Tensor=None, + view_options: ContextOptions=None, + ): + # make view_options None if context_length > video_length, or if equal and equal not allowed + if view_options: + if view_options.context_length > video_length: + view_options = None + elif view_options.context_length == video_length and not view_options.use_on_equal_length: + view_options = None + if not view_options: + for attention_block, norm in zip(self.attention_blocks, self.norms): + norm_hidden_states = norm(hidden_states).to(hidden_states.dtype) + hidden_states = ( + attention_block( + norm_hidden_states, + encoder_hidden_states=encoder_hidden_states + if attention_block.is_cross_attention + else None, + attention_mask=attention_mask, + video_length=video_length, + scale_mask=scale_mask + ) + hidden_states + ) + else: + # views idea gotten from diffusers AnimateDiff FreeNoise implementation: + # https://github.com/arthur-qiu/FreeNoise-AnimateDiff/blob/main/animatediff/models/motion_module.py + # apply sliding context windows (views) + views = 
get_context_windows(num_frames=video_length, opts=view_options) + hidden_states = rearrange(hidden_states, "(b f) d c -> b f d c", f=video_length) + value_final = torch.zeros_like(hidden_states) + count_final = torch.zeros_like(hidden_states) + # bias_final = [0.0] * video_length + batched_conds = hidden_states.size(1) // video_length + for sub_idxs in views: + sub_hidden_states = rearrange(hidden_states[:, sub_idxs], "b f d c -> (b f) d c") + for attention_block, norm in zip(self.attention_blocks, self.norms): + norm_hidden_states = norm(sub_hidden_states).to(sub_hidden_states.dtype) + sub_hidden_states = ( + attention_block( + norm_hidden_states, + encoder_hidden_states=encoder_hidden_states # do these need to be changed for sub_idxs too? + if attention_block.is_cross_attention + else None, + attention_mask=attention_mask, + video_length=len(sub_idxs), + scale_mask=scale_mask[:, sub_idxs, :] if scale_mask is not None else scale_mask + ) + sub_hidden_states + ) + sub_hidden_states = rearrange(sub_hidden_states, "(b f) d c -> b f d c", f=len(sub_idxs)) + + # if view_options.fuse_method == ContextFuseMethod.RELATIVE: + # for pos, idx in enumerate(sub_idxs): + # # bias is the influence of a specific index in relation to the whole context window + # bias = 1 - abs(idx - (sub_idxs[0] + sub_idxs[-1]) / 2) / ((sub_idxs[-1] - sub_idxs[0] + 1e-2) / 2) + # bias = max(1e-2, bias) + # # take weighted averate relative to total bias of current idx + # bias_total = bias_final[idx] + # prev_weight = torch.tensor([bias_total / (bias_total + bias)], + # dtype=value_final.dtype, device=value_final.device).unsqueeze(0).unsqueeze(-1).unsqueeze(-1) + # #prev_weight = torch.cat([prev_weight]*value_final.shape[1], dim=1) + # new_weight = torch.tensor([bias / (bias_total + bias)], + # dtype=value_final.dtype, device=value_final.device).unsqueeze(0).unsqueeze(-1).unsqueeze(-1) + # #new_weight = torch.cat([new_weight]*value_final.shape[1], dim=1) + # test = value_final[:, idx:idx+1, :, :] + # value_final[:, idx:idx+1, :, :] = value_final[:, idx:idx+1, :, :] * prev_weight + sub_hidden_states[:, pos:pos+1, : ,:] * new_weight + # bias_final[idx] = bias_total + bias + # else: + weights = get_context_weights(len(sub_idxs), view_options.fuse_method) * batched_conds + weights_tensor = torch.Tensor(weights).to(device=hidden_states.device).unsqueeze(0).unsqueeze(-1).unsqueeze(-1) + value_final[:, sub_idxs] += sub_hidden_states * weights_tensor + count_final[:, sub_idxs] += weights_tensor + + # get weighted average of sub_hidden_states, if fuse method requires it + # if view_options.fuse_method != ContextFuseMethod.RELATIVE: + hidden_states = value_final / count_final + hidden_states = rearrange(hidden_states, "b f d c -> (b f) d c") + del value_final + del count_final + # del bias_final + + hidden_states = self.ff(self.ff_norm(hidden_states)) + hidden_states + + output = hidden_states + return output + + +class PositionalEncoding(nn.Module): + def __init__(self, d_model, dropout=0.0, max_len=24): + super().__init__() + self.dropout = nn.Dropout(p=dropout) + position = torch.arange(max_len).unsqueeze(1) + div_term = torch.exp( + torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model) + ) + pe = torch.zeros(1, max_len, d_model) + pe[0, :, 0::2] = torch.sin(position * div_term) + pe[0, :, 1::2] = torch.cos(position * div_term) + self.register_buffer("pe", pe) + self.sub_idxs = None + + def set_sub_idxs(self, sub_idxs: list[int]): + self.sub_idxs = sub_idxs + + def forward(self, x): + #if self.sub_idxs is not None: + # x = 
x + self.pe[:, self.sub_idxs] + #else: + x = x + self.pe[:, : x.size(1)] + return self.dropout(x) + + +class VersatileAttention(CrossAttentionMM): + def __init__( + self, + attention_mode=None, + cross_frame_attention_mode=None, + temporal_position_encoding=False, + temporal_position_encoding_max_len=24, + ops=comfy.ops.disable_weight_init, + *args, + **kwargs, + ): + super().__init__(operations=ops, *args, **kwargs) + assert attention_mode == "Temporal" + + self.attention_mode = attention_mode + self.is_cross_attention = kwargs["context_dim"] is not None + + self.pos_encoder = ( + PositionalEncoding( + kwargs["query_dim"], + dropout=0.0, + max_len=temporal_position_encoding_max_len, + ) + if (temporal_position_encoding and attention_mode == "Temporal") + else None + ) + + def extra_repr(self): + return f"(Module Info) Attention_Mode: {self.attention_mode}, Is_Cross_Attention: {self.is_cross_attention}" + + def set_scale_multiplier(self, multiplier: Union[float, None]): + if multiplier is None or math.isclose(multiplier, 1.0): + self.scale = 1.0 + else: + self.scale = multiplier + + def set_sub_idxs(self, sub_idxs: list[int]): + if self.pos_encoder != None: + self.pos_encoder.set_sub_idxs(sub_idxs) + + def forward( + self, + hidden_states: Tensor, + encoder_hidden_states=None, + attention_mask=None, + video_length=None, + scale_mask=None, + ): + if self.attention_mode != "Temporal": + raise NotImplementedError + + d = hidden_states.shape[1] + hidden_states = rearrange( + hidden_states, "(b f) d c -> (b d) f c", f=video_length + ) + + if self.pos_encoder is not None: + hidden_states = self.pos_encoder(hidden_states).to(hidden_states.dtype) + + encoder_hidden_states = ( + repeat(encoder_hidden_states, "b n c -> (b d) n c", d=d) + if encoder_hidden_states is not None + else encoder_hidden_states + ) + + hidden_states = super().forward( + hidden_states, + encoder_hidden_states, + value=None, + mask=attention_mask, + scale_mask=scale_mask, + ) + + hidden_states = rearrange(hidden_states, "(b d) f c -> (b f) d c", d=d) + + return hidden_states diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..3cfc5be27e62bca643d3df103d93f1033b2f217b --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes.py @@ -0,0 +1,149 @@ +import comfy.sample as comfy_sample + +from .sampling import motion_sample_factory + +from .nodes_gen1 import (AnimateDiffLoaderGen1, LegacyAnimateDiffLoaderWithContext, AnimateDiffModelSettings, + AnimateDiffModelSettingsSimple, AnimateDiffModelSettingsAdvanced, AnimateDiffModelSettingsAdvancedAttnStrengths) +from .nodes_gen2 import UseEvolvedSamplingNode, ApplyAnimateDiffModelNode, ApplyAnimateDiffModelBasicNode, LoadAnimateDiffModelNode, ADKeyframeNode +from .nodes_multival import MultivalDynamicNode, MultivalScaledMaskNode +from .nodes_sample import (FreeInitOptionsNode, NoiseLayerAddWeightedNode, SampleSettingsNode, NoiseLayerAddNode, NoiseLayerReplaceNode, IterationOptionsNode, + CustomCFGNode, CustomCFGKeyframeNode) +from .nodes_sigma_schedule import (SigmaScheduleNode, RawSigmaScheduleNode, WeightedAverageSigmaScheduleNode, InterpolatedWeightedAverageSigmaScheduleNode, SplitAndCombineSigmaScheduleNode) +from .nodes_context import (LegacyLoopedUniformContextOptionsNode, LoopedUniformContextOptionsNode, LoopedUniformViewOptionsNode, StandardUniformContextOptionsNode, StandardStaticContextOptionsNode, 
BatchedContextOptionsNode, + StandardStaticViewOptionsNode, StandardUniformViewOptionsNode, ViewAsContextOptionsNode) +from .nodes_ad_settings import AnimateDiffSettingsNode, ManualAdjustPENode, SweetspotStretchPENode, FullStretchPENode +from .nodes_extras import AnimateDiffUnload, EmptyLatentImageLarge, CheckpointLoaderSimpleWithNoiseSelect +from .nodes_deprecated import AnimateDiffLoader_Deprecated, AnimateDiffLoaderAdvanced_Deprecated, AnimateDiffCombine_Deprecated +from .nodes_lora import AnimateDiffLoraLoader, MaskedLoraLoader + +from .logger import logger + +# override comfy_sample.sample with animatediff-support version +comfy_sample.sample = motion_sample_factory(comfy_sample.sample) +comfy_sample.sample_custom = motion_sample_factory(comfy_sample.sample_custom, is_custom=True) + + +NODE_CLASS_MAPPINGS = { + # Unencapsulated + "ADE_AnimateDiffLoRALoader": AnimateDiffLoraLoader, + "ADE_AnimateDiffSamplingSettings": SampleSettingsNode, + "ADE_AnimateDiffKeyframe": ADKeyframeNode, + # Multival Nodes + "ADE_MultivalDynamic": MultivalDynamicNode, + "ADE_MultivalScaledMask": MultivalScaledMaskNode, + # Context Opts + "ADE_StandardStaticContextOptions": StandardStaticContextOptionsNode, + "ADE_StandardUniformContextOptions": StandardUniformContextOptionsNode, + "ADE_LoopedUniformContextOptions": LoopedUniformContextOptionsNode, + "ADE_ViewsOnlyContextOptions": ViewAsContextOptionsNode, + "ADE_BatchedContextOptions": BatchedContextOptionsNode, + "ADE_AnimateDiffUniformContextOptions": LegacyLoopedUniformContextOptionsNode, # Legacy + # View Opts + "ADE_StandardStaticViewOptions": StandardStaticViewOptionsNode, + "ADE_StandardUniformViewOptions": StandardUniformViewOptionsNode, + "ADE_LoopedUniformViewOptions": LoopedUniformViewOptionsNode, + # Iteration Opts + "ADE_IterationOptsDefault": IterationOptionsNode, + "ADE_IterationOptsFreeInit": FreeInitOptionsNode, + # Noise Layer Nodes + "ADE_NoiseLayerAdd": NoiseLayerAddNode, + "ADE_NoiseLayerAddWeighted": NoiseLayerAddWeightedNode, + "ADE_NoiseLayerReplace": NoiseLayerReplaceNode, + # AnimateDiff Settings + "ADE_AnimateDiffSettings": AnimateDiffSettingsNode, + "ADE_AdjustPESweetspotStretch": SweetspotStretchPENode, + "ADE_AdjustPEFullStretch": FullStretchPENode, + "ADE_AdjustPEManual": ManualAdjustPENode, + # Sample Settings + "ADE_CustomCFG": CustomCFGNode, + "ADE_CustomCFGKeyframe": CustomCFGKeyframeNode, + "ADE_SigmaSchedule": SigmaScheduleNode, + "ADE_RawSigmaSchedule": RawSigmaScheduleNode, + "ADE_SigmaScheduleWeightedAverage": WeightedAverageSigmaScheduleNode, + "ADE_SigmaScheduleWeightedAverageInterp": InterpolatedWeightedAverageSigmaScheduleNode, + "ADE_SigmaScheduleSplitAndCombine": SplitAndCombineSigmaScheduleNode, + # Extras Nodes + "ADE_AnimateDiffUnload": AnimateDiffUnload, + "ADE_EmptyLatentImageLarge": EmptyLatentImageLarge, + "CheckpointLoaderSimpleWithNoiseSelect": CheckpointLoaderSimpleWithNoiseSelect, + # Gen1 Nodes + "ADE_AnimateDiffLoaderGen1": AnimateDiffLoaderGen1, + "ADE_AnimateDiffLoaderWithContext": LegacyAnimateDiffLoaderWithContext, + "ADE_AnimateDiffModelSettings_Release": AnimateDiffModelSettings, + "ADE_AnimateDiffModelSettingsSimple": AnimateDiffModelSettingsSimple, + "ADE_AnimateDiffModelSettings": AnimateDiffModelSettingsAdvanced, + "ADE_AnimateDiffModelSettingsAdvancedAttnStrengths": AnimateDiffModelSettingsAdvancedAttnStrengths, + # Gen2 Nodes + "ADE_UseEvolvedSampling": UseEvolvedSamplingNode, + "ADE_ApplyAnimateDiffModelSimple": ApplyAnimateDiffModelBasicNode, + "ADE_ApplyAnimateDiffModel": 
ApplyAnimateDiffModelNode, + "ADE_LoadAnimateDiffModel": LoadAnimateDiffModelNode, + # MaskedLoraLoader + #"ADE_MaskedLoadLora": MaskedLoraLoader, + # Deprecated Nodes + "AnimateDiffLoaderV1": AnimateDiffLoader_Deprecated, + "ADE_AnimateDiffLoaderV1Advanced": AnimateDiffLoaderAdvanced_Deprecated, + "ADE_AnimateDiffCombine": AnimateDiffCombine_Deprecated, +} +NODE_DISPLAY_NAME_MAPPINGS = { + # Unencapsulated + "ADE_AnimateDiffLoRALoader": "Load AnimateDiff LoRA 🎭🅐🅓", + "ADE_AnimateDiffSamplingSettings": "Sample Settings 🎭🅐🅓", + "ADE_AnimateDiffKeyframe": "AnimateDiff Keyframe 🎭🅐🅓", + # Multival Nodes + "ADE_MultivalDynamic": "Multival Dynamic 🎭🅐🅓", + "ADE_MultivalScaledMask": "Multival Scaled Mask 🎭🅐🅓", + # Context Opts + "ADE_StandardStaticContextOptions": "Context Options◆Standard Static 🎭🅐🅓", + "ADE_StandardUniformContextOptions": "Context Options◆Standard Uniform 🎭🅐🅓", + "ADE_LoopedUniformContextOptions": "Context Options◆Looped Uniform 🎭🅐🅓", + "ADE_ViewsOnlyContextOptions": "Context Options◆Views Only [VRAM⇈] 🎭🅐🅓", + "ADE_BatchedContextOptions": "Context Options◆Batched [Non-AD] 🎭🅐🅓", + "ADE_AnimateDiffUniformContextOptions": "Context Options◆Looped Uniform 🎭🅐🅓", # Legacy + # View Opts + "ADE_StandardStaticViewOptions": "View Options◆Standard Static 🎭🅐🅓", + "ADE_StandardUniformViewOptions": "View Options◆Standard Uniform 🎭🅐🅓", + "ADE_LoopedUniformViewOptions": "View Options◆Looped Uniform 🎭🅐🅓", + # Iteration Opts + "ADE_IterationOptsDefault": "Default Iteration Options 🎭🅐🅓", + "ADE_IterationOptsFreeInit": "FreeInit Iteration Options 🎭🅐🅓", + # Noise Layer Nodes + "ADE_NoiseLayerAdd": "Noise Layer [Add] 🎭🅐🅓", + "ADE_NoiseLayerAddWeighted": "Noise Layer [Add Weighted] 🎭🅐🅓", + "ADE_NoiseLayerReplace": "Noise Layer [Replace] 🎭🅐🅓", + # AnimateDiff Settings + "ADE_AnimateDiffSettings": "AnimateDiff Settings 🎭🅐🅓", + "ADE_AdjustPESweetspotStretch": "Adjust PE [Sweetspot Stretch] 🎭🅐🅓", + "ADE_AdjustPEFullStretch": "Adjust PE [Full Stretch] 🎭🅐🅓", + "ADE_AdjustPEManual": "Adjust PE [Manual] 🎭🅐🅓", + # Sample Settings + "ADE_CustomCFG": "Custom CFG 🎭🅐🅓", + "ADE_CustomCFGKeyframe": "Custom CFG Keyframe 🎭🅐🅓", + "ADE_SigmaSchedule": "Create Sigma Schedule 🎭🅐🅓", + "ADE_RawSigmaSchedule": "Create Raw Sigma Schedule 🎭🅐🅓", + "ADE_SigmaScheduleWeightedAverage": "Sigma Schedule Weighted Mean 🎭🅐🅓", + "ADE_SigmaScheduleWeightedAverageInterp": "Sigma Schedule Interpolated Mean 🎭🅐🅓", + "ADE_SigmaScheduleSplitAndCombine": "Sigma Schedule Split Combine 🎭🅐🅓", + # Extras Nodes + "ADE_AnimateDiffUnload": "AnimateDiff Unload 🎭🅐🅓", + "ADE_EmptyLatentImageLarge": "Empty Latent Image (Big Batch) 🎭🅐🅓", + "CheckpointLoaderSimpleWithNoiseSelect": "Load Checkpoint w/ Noise Select 🎭🅐🅓", + # Gen1 Nodes + "ADE_AnimateDiffLoaderGen1": "AnimateDiff Loader 🎭🅐🅓①", + "ADE_AnimateDiffLoaderWithContext": "AnimateDiff Loader [Legacy] 🎭🅐🅓①", + "ADE_AnimateDiffModelSettings_Release": "[DEPR] Motion Model Settings 🎭🅐🅓①", + "ADE_AnimateDiffModelSettingsSimple": "[DEPR] Motion Model Settings (Simple) 🎭🅐🅓①", + "ADE_AnimateDiffModelSettings": "[DEPR] Motion Model Settings (Advanced) 🎭🅐🅓①", + "ADE_AnimateDiffModelSettingsAdvancedAttnStrengths": "[DEPR] Motion Model Settings (Adv. Attn) 🎭🅐🅓①", + # Gen2 Nodes + "ADE_UseEvolvedSampling": "Use Evolved Sampling 🎭🅐🅓②", + "ADE_ApplyAnimateDiffModelSimple": "Apply AnimateDiff Model 🎭🅐🅓②", + "ADE_ApplyAnimateDiffModel": "Apply AnimateDiff Model (Adv.) 
🎭🅐🅓②", + "ADE_LoadAnimateDiffModel": "Load AnimateDiff Model 🎭🅐🅓②", + # MaskedLoraLoader + #"ADE_MaskedLoadLora": "Load LoRA (Masked) 🎭🅐🅓", + # Deprecated Nodes + "AnimateDiffLoaderV1": "AnimateDiff Loader [DEPRECATED] 🎭🅐🅓", + "ADE_AnimateDiffLoaderV1Advanced": "AnimateDiff Loader (Advanced) [DEPRECATED] 🎭🅐🅓", + "ADE_AnimateDiffCombine": "AnimateDiff Combine [DEPRECATED, Use Video Combine (VHS) Instead!] 🎭🅐🅓", +} diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_ad_settings.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_ad_settings.py new file mode 100644 index 0000000000000000000000000000000000000000..8f575ce0bd1b03e67d85f7ef431598993a717d89 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_ad_settings.py @@ -0,0 +1,107 @@ +from .ad_settings import AdjustPE, AdjustPEGroup, AnimateDiffSettings +from .utils_model import BIGMAX + + +class AnimateDiffSettingsNode: + @classmethod + def INPUT_TYPES(s): + return { + "optional": { + "pe_adjust": ("PE_ADJUST",), + } + } + + RETURN_TYPES = ("AD_SETTINGS",) + CATEGORY = "Animate Diff 🎭🅐🅓/ad settings" + FUNCTION = "get_ad_settings" + + def get_ad_settings(self, pe_adjust: AdjustPEGroup=None): + return (AnimateDiffSettings(adjust_pe=pe_adjust),) + + +class ManualAdjustPENode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "cap_initial_pe_length": ("INT", {"default": 0, "min": 0, "step": 1}), + "interpolate_pe_to_length": ("INT", {"default": 0, "min": 0, "step": 1}), + "initial_pe_idx_offset": ("INT", {"default": 0, "min": 0, "step": 1}), + "final_pe_idx_offset": ("INT", {"default": 0, "min": 0, "step": 1}), + "print_adjustment": ("BOOLEAN", {"default": False}), + }, + "optional": { + "prev_pe_adjust": ("PE_ADJUST",), + } + } + + RETURN_TYPES = ("PE_ADJUST",) + CATEGORY = "Animate Diff 🎭🅐🅓/ad settings/pe adjust" + FUNCTION = "get_pe_adjust" + + def get_pe_adjust(self, cap_initial_pe_length: int, interpolate_pe_to_length: int, + initial_pe_idx_offset: int, final_pe_idx_offset: int, print_adjustment: bool, + prev_pe_adjust: AdjustPEGroup=None): + if prev_pe_adjust is None: + prev_pe_adjust = AdjustPEGroup() + prev_pe_adjust = prev_pe_adjust.clone() + adjust = AdjustPE(cap_initial_pe_length=cap_initial_pe_length, interpolate_pe_to_length=interpolate_pe_to_length, + initial_pe_idx_offset=initial_pe_idx_offset, final_pe_idx_offset=final_pe_idx_offset, + print_adjustment=print_adjustment) + prev_pe_adjust.add(adjust) + return (prev_pe_adjust,) + + +class SweetspotStretchPENode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "sweetspot": ("INT", {"default": 16, "min": 0, "max": BIGMAX},), + "new_sweetspot": ("INT", {"default": 16, "min": 0, "max": BIGMAX},), + "print_adjustment": ("BOOLEAN", {"default": False}), + }, + "optional": { + "prev_pe_adjust": ("PE_ADJUST",), + } + } + + RETURN_TYPES = ("PE_ADJUST",) + CATEGORY = "Animate Diff 🎭🅐🅓/ad settings/pe adjust" + FUNCTION = "get_pe_adjust" + + def get_pe_adjust(self, sweetspot: int, new_sweetspot: int, print_adjustment: bool, prev_pe_adjust: AdjustPEGroup=None): + if prev_pe_adjust is None: + prev_pe_adjust = AdjustPEGroup() + prev_pe_adjust = prev_pe_adjust.clone() + adjust = AdjustPE(cap_initial_pe_length=sweetspot, interpolate_pe_to_length=new_sweetspot, + print_adjustment=print_adjustment) + prev_pe_adjust.add(adjust) + return (prev_pe_adjust,) + + +class FullStretchPENode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "pe_stretch": ("INT", {"default": 0, "min": 0, "max": 
BIGMAX},), + "print_adjustment": ("BOOLEAN", {"default": False}), + }, + "optional": { + "prev_pe_adjust": ("PE_ADJUST",), + } + } + + RETURN_TYPES = ("PE_ADJUST",) + CATEGORY = "Animate Diff 🎭🅐🅓/ad settings/pe adjust" + FUNCTION = "get_pe_adjust" + + def get_pe_adjust(self, pe_stretch: int, print_adjustment: bool, prev_pe_adjust: AdjustPEGroup=None): + if prev_pe_adjust is None: + prev_pe_adjust = AdjustPEGroup() + prev_pe_adjust = prev_pe_adjust.clone() + adjust = AdjustPE(motion_pe_stretch=pe_stretch, + print_adjustment=print_adjustment) + prev_pe_adjust.add(adjust) + return (prev_pe_adjust,) diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_context.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_context.py new file mode 100644 index 0000000000000000000000000000000000000000..fd86d0ff2db3cdf7fd221cab0ac3238286b635b6 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_context.py @@ -0,0 +1,347 @@ +from .context import ContextFuseMethod, ContextOptions, ContextOptionsGroup, ContextSchedules +from .utils_model import BIGMAX + + +LENGTH_MAX = 128 # keep an eye on these max values; +STRIDE_MAX = 32 # would need to be updated +OVERLAP_MAX = 128 # if new motion modules come out + + +class LoopedUniformContextOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "context_length": ("INT", {"default": 16, "min": 1, "max": LENGTH_MAX}), + "context_stride": ("INT", {"default": 1, "min": 1, "max": STRIDE_MAX}), + "context_overlap": ("INT", {"default": 4, "min": 0, "max": OVERLAP_MAX}), + "closed_loop": ("BOOLEAN", {"default": False},), + #"sync_context_to_pe": ("BOOLEAN", {"default": False},), + }, + "optional": { + "fuse_method": (ContextFuseMethod.LIST,), + "use_on_equal_length": ("BOOLEAN", {"default": False},), + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}), + "guarantee_steps": ("INT", {"default": 1, "min": 0, "max": BIGMAX}), + "prev_context": ("CONTEXT_OPTIONS",), + "view_opts": ("VIEW_OPTS",), + } + } + + RETURN_TYPES = ("CONTEXT_OPTIONS",) + RETURN_NAMES = ("CONTEXT_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/context opts" + FUNCTION = "create_options" + + def create_options(self, context_length: int, context_stride: int, context_overlap: int, closed_loop: bool, + fuse_method: str=ContextFuseMethod.FLAT, use_on_equal_length=False, start_percent: float=0.0, guarantee_steps: int=1, + view_opts: ContextOptions=None, prev_context: ContextOptionsGroup=None): + if prev_context is None: + prev_context = ContextOptionsGroup() + prev_context = prev_context.clone() + + context_options = ContextOptions( + context_length=context_length, + context_stride=context_stride, + context_overlap=context_overlap, + context_schedule=ContextSchedules.UNIFORM_LOOPED, + closed_loop=closed_loop, + fuse_method=fuse_method, + use_on_equal_length=use_on_equal_length, + start_percent=start_percent, + guarantee_steps=guarantee_steps, + view_options=view_opts, + ) + #context_options.set_sync_context_to_pe(sync_context_to_pe) + prev_context.add(context_options) + return (prev_context,) + + +# This Legacy version exists to maintain compatiblity with old workflows +class LegacyLoopedUniformContextOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "context_length": ("INT", {"default": 16, "min": 1, "max": LENGTH_MAX}), + "context_stride": ("INT", {"default": 1, "min": 1, "max": STRIDE_MAX}), + "context_overlap": ("INT", {"default": 4, "min": 0, "max": OVERLAP_MAX}), + 
"context_schedule": (ContextSchedules.LEGACY_UNIFORM_SCHEDULE_LIST,), + "closed_loop": ("BOOLEAN", {"default": False},), + #"sync_context_to_pe": ("BOOLEAN", {"default": False},), + }, + "optional": { + "fuse_method": (ContextFuseMethod.LIST, {"default": ContextFuseMethod.FLAT}), + "use_on_equal_length": ("BOOLEAN", {"default": False},), + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}), + "guarantee_steps": ("INT", {"default": 1, "min": 0, "max": BIGMAX}), + "prev_context": ("CONTEXT_OPTIONS",), + "view_opts": ("VIEW_OPTS",), + } + } + + RETURN_TYPES = ("CONTEXT_OPTIONS",) + RETURN_NAMES = ("CONTEXT_OPTS",) + CATEGORY = "" # No Category, so will not appear in menu + FUNCTION = "create_options" + + def create_options(self, fuse_method: str=ContextFuseMethod.FLAT, context_schedule: str=None, **kwargs): + return LoopedUniformContextOptionsNode.create_options(self, fuse_method=fuse_method, **kwargs) + + +class StandardUniformContextOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "context_length": ("INT", {"default": 16, "min": 1, "max": LENGTH_MAX}), + "context_stride": ("INT", {"default": 1, "min": 1, "max": STRIDE_MAX}), + "context_overlap": ("INT", {"default": 4, "min": 0, "max": OVERLAP_MAX}), + }, + "optional": { + "fuse_method": (ContextFuseMethod.LIST,), + "use_on_equal_length": ("BOOLEAN", {"default": False},), + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}), + "guarantee_steps": ("INT", {"default": 1, "min": 0, "max": BIGMAX}), + "prev_context": ("CONTEXT_OPTIONS",), + "view_opts": ("VIEW_OPTS",), + } + } + + RETURN_TYPES = ("CONTEXT_OPTIONS",) + RETURN_NAMES = ("CONTEXT_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/context opts" + FUNCTION = "create_options" + + def create_options(self, context_length: int, context_stride: int, context_overlap: int, + fuse_method: str=ContextFuseMethod.PYRAMID, use_on_equal_length=False, start_percent: float=0.0, guarantee_steps: int=1, + view_opts: ContextOptions=None, prev_context: ContextOptionsGroup=None): + if prev_context is None: + prev_context = ContextOptionsGroup() + prev_context = prev_context.clone() + + context_options = ContextOptions( + context_length=context_length, + context_stride=context_stride, + context_overlap=context_overlap, + context_schedule=ContextSchedules.UNIFORM_STANDARD, + closed_loop=False, + fuse_method=fuse_method, + use_on_equal_length=use_on_equal_length, + start_percent=start_percent, + guarantee_steps=guarantee_steps, + view_options=view_opts, + ) + prev_context.add(context_options) + return (prev_context,) + + +class StandardStaticContextOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "context_length": ("INT", {"default": 16, "min": 1, "max": LENGTH_MAX}), + "context_overlap": ("INT", {"default": 4, "min": 0, "max": OVERLAP_MAX}), + }, + "optional": { + "fuse_method": (ContextFuseMethod.LIST_STATIC,), + "use_on_equal_length": ("BOOLEAN", {"default": False},), + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}), + "guarantee_steps": ("INT", {"default": 1, "min": 0, "max": BIGMAX}), + "prev_context": ("CONTEXT_OPTIONS",), + "view_opts": ("VIEW_OPTS",), + } + } + + RETURN_TYPES = ("CONTEXT_OPTIONS",) + RETURN_NAMES = ("CONTEXT_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/context opts" + FUNCTION = "create_options" + + def create_options(self, context_length: int, context_overlap: int, + fuse_method: str=ContextFuseMethod.PYRAMID, use_on_equal_length=False, 
start_percent: float=0.0, guarantee_steps: int=1, + view_opts: ContextOptions=None, prev_context: ContextOptionsGroup=None): + if prev_context is None: + prev_context = ContextOptionsGroup() + prev_context = prev_context.clone() + + context_options = ContextOptions( + context_length=context_length, + context_stride=None, + context_overlap=context_overlap, + context_schedule=ContextSchedules.STATIC_STANDARD, + fuse_method=fuse_method, + use_on_equal_length=use_on_equal_length, + start_percent=start_percent, + guarantee_steps=guarantee_steps, + view_options=view_opts, + ) + prev_context.add(context_options) + return (prev_context,) + + +class BatchedContextOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "context_length": ("INT", {"default": 16, "min": 1, "max": LENGTH_MAX}), + }, + "optional": { + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}), + "guarantee_steps": ("INT", {"default": 1, "min": 0, "max": BIGMAX}), + "prev_context": ("CONTEXT_OPTIONS",), + } + } + + RETURN_TYPES = ("CONTEXT_OPTIONS",) + RETURN_NAMES = ("CONTEXT_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/context opts" + FUNCTION = "create_options" + + def create_options(self, context_length: int, start_percent: float=0.0, guarantee_steps: int=1, + prev_context: ContextOptionsGroup=None): + if prev_context is None: + prev_context = ContextOptionsGroup() + prev_context = prev_context.clone() + + context_options = ContextOptions( + context_length=context_length, + context_overlap=0, + context_schedule=ContextSchedules.BATCHED, + start_percent=start_percent, + guarantee_steps=guarantee_steps, + ) + prev_context.add(context_options) + return (prev_context,) + + +class ViewAsContextOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "view_opts_req": ("VIEW_OPTS",), + }, + "optional": { + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}), + "guarantee_steps": ("INT", {"default": 1, "min": 0, "max": BIGMAX}), + "prev_context": ("CONTEXT_OPTIONS",), + } + } + + RETURN_TYPES = ("CONTEXT_OPTIONS",) + RETURN_NAMES = ("CONTEXT_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/context opts" + FUNCTION = "create_options" + + def create_options(self, view_opts_req: ContextOptions, start_percent: float=0.0, guarantee_steps: int=1, + prev_context: ContextOptionsGroup=None): + if prev_context is None: + prev_context = ContextOptionsGroup() + prev_context = prev_context.clone() + context_options = ContextOptions( + context_schedule=ContextSchedules.VIEW_AS_CONTEXT, + start_percent=start_percent, + guarantee_steps=guarantee_steps, + view_options=view_opts_req, + use_on_equal_length=True + ) + prev_context.add(context_options) + return (prev_context,) + + +######################### +# View Options +class StandardStaticViewOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "view_length": ("INT", {"default": 16, "min": 1, "max": LENGTH_MAX}), + "view_overlap": ("INT", {"default": 4, "min": 0, "max": OVERLAP_MAX}), + }, + "optional": { + "fuse_method": (ContextFuseMethod.LIST,), + } + } + + RETURN_TYPES = ("VIEW_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/context opts/view opts" + FUNCTION = "create_options" + + def create_options(self, view_length: int, view_overlap: int, + fuse_method: str=ContextFuseMethod.FLAT,): + view_options = ContextOptions( + context_length=view_length, + context_stride=None, + context_overlap=view_overlap, + context_schedule=ContextSchedules.STATIC_STANDARD, + fuse_method=fuse_method, + ) + 
return (view_options,) + + +class StandardUniformViewOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "view_length": ("INT", {"default": 16, "min": 1, "max": LENGTH_MAX}), + "view_stride": ("INT", {"default": 1, "min": 1, "max": STRIDE_MAX}), + "view_overlap": ("INT", {"default": 4, "min": 0, "max": OVERLAP_MAX}), + }, + "optional": { + "fuse_method": (ContextFuseMethod.LIST,), + } + } + + RETURN_TYPES = ("VIEW_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/context opts/view opts" + FUNCTION = "create_options" + + def create_options(self, view_length: int, view_overlap: int, view_stride: int, + fuse_method: str=ContextFuseMethod.PYRAMID,): + view_options = ContextOptions( + context_length=view_length, + context_stride=view_stride, + context_overlap=view_overlap, + context_schedule=ContextSchedules.UNIFORM_STANDARD, + fuse_method=fuse_method, + ) + return (view_options,) + + +class LoopedUniformViewOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "view_length": ("INT", {"default": 16, "min": 1, "max": LENGTH_MAX}), + "view_stride": ("INT", {"default": 1, "min": 1, "max": STRIDE_MAX}), + "view_overlap": ("INT", {"default": 4, "min": 0, "max": OVERLAP_MAX}), + "closed_loop": ("BOOLEAN", {"default": False},), + }, + "optional": { + "fuse_method": (ContextFuseMethod.LIST,), + "use_on_equal_length": ("BOOLEAN", {"default": False},), + } + } + + RETURN_TYPES = ("VIEW_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/context opts/view opts" + FUNCTION = "create_options" + + def create_options(self, view_length: int, view_overlap: int, view_stride: int, closed_loop: bool, + fuse_method: str=ContextFuseMethod.PYRAMID, use_on_equal_length=False): + view_options = ContextOptions( + context_length=view_length, + context_stride=view_stride, + context_overlap=view_overlap, + context_schedule=ContextSchedules.UNIFORM_LOOPED, + closed_loop=closed_loop, + fuse_method=fuse_method, + use_on_equal_length=use_on_equal_length, + ) + return (view_options,) diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_deprecated.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_deprecated.py new file mode 100644 index 0000000000000000000000000000000000000000..ecf88a5a20f4e5baaf4ae6c271060604df2b564a --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_deprecated.py @@ -0,0 +1,277 @@ +import json +import os +import shutil +import subprocess +from typing import Dict, List + +import numpy as np +import torch +from PIL import Image +from PIL.PngImagePlugin import PngInfo + +import folder_paths +from comfy.model_patcher import ModelPatcher + +from .context import ContextOptionsGroup, ContextOptions, ContextSchedules +from .logger import logger +from .utils_model import Folders, BetaSchedules, get_available_motion_models +from .model_injection import ModelPatcherAndInjector, InjectionParams, MotionModelGroup, load_motion_module_gen1 + + +class AnimateDiffLoader_Deprecated: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "model": ("MODEL",), + "latents": ("LATENT",), + "model_name": (get_available_motion_models(),), + "unlimited_area_hack": ("BOOLEAN", {"default": False},), + "beta_schedule": (BetaSchedules.get_alias_list_with_first_element(BetaSchedules.SQRT_LINEAR),), + }, + } + + RETURN_TYPES = ("MODEL", "LATENT") + CATEGORY = "" + FUNCTION = "load_mm_and_inject_params" + + def load_mm_and_inject_params( + self, + model: ModelPatcher, + latents: Dict[str, torch.Tensor], + model_name: str, unlimited_area_hack: 
bool, beta_schedule: str, + ): + # load motion module + motion_model = load_motion_module_gen1(model_name, model) + # get total frames + init_frames_len = len(latents["samples"]) # deprecated - no longer used for anything lol + # set injection params + params = InjectionParams( + unlimited_area_hack=unlimited_area_hack, + apply_mm_groupnorm_hack=True, + model_name=model_name, + apply_v2_properly=False, + ) + # inject for use in sampling code + model = ModelPatcherAndInjector(model) + model.motion_models = MotionModelGroup(motion_model) + model.motion_injection_params = params + + # save model sampling from BetaSchedule as object patch + # if autoselect, get suggested beta_schedule from motion model + if beta_schedule == BetaSchedules.AUTOSELECT and not model.motion_models.is_empty(): + beta_schedule = model.motion_models[0].model.get_best_beta_schedule(log=True) + new_model_sampling = BetaSchedules.to_model_sampling(beta_schedule, model) + if new_model_sampling is not None: + model.add_object_patch("model_sampling", new_model_sampling) + + del motion_model + return (model, latents) + + +class AnimateDiffLoaderAdvanced_Deprecated: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "model": ("MODEL",), + "latents": ("LATENT",), + "model_name": (get_available_motion_models(),), + "unlimited_area_hack": ("BOOLEAN", {"default": False},), + "context_length": ("INT", {"default": 16, "min": 0, "max": 1000}), + "context_stride": ("INT", {"default": 1, "min": 1, "max": 1000}), + "context_overlap": ("INT", {"default": 4, "min": 0, "max": 1000}), + "context_schedule": (ContextSchedules.LEGACY_UNIFORM_SCHEDULE_LIST,), + "closed_loop": ("BOOLEAN", {"default": False},), + "beta_schedule": (BetaSchedules.get_alias_list_with_first_element(BetaSchedules.SQRT_LINEAR),), + }, + } + + RETURN_TYPES = ("MODEL", "LATENT") + CATEGORY = "" + FUNCTION = "load_mm_and_inject_params" + + def load_mm_and_inject_params(self, + model: ModelPatcher, + latents: Dict[str, torch.Tensor], + model_name: str, unlimited_area_hack: bool, + context_length: int, context_stride: int, context_overlap: int, context_schedule: str, closed_loop: bool, + beta_schedule: str, + ): + # load motion module + motion_model = load_motion_module_gen1(model_name, model) + # get total frames + init_frames_len = len(latents["samples"]) # deprecated - no longer used for anything lol + # set injection params + params = InjectionParams( + unlimited_area_hack=unlimited_area_hack, + apply_mm_groupnorm_hack=True, + model_name=model_name, + apply_v2_properly=False, + ) + context_group = ContextOptionsGroup() + context_group.add( + ContextOptions( + context_length=context_length, + context_stride=context_stride, + context_overlap=context_overlap, + context_schedule=context_schedule, + closed_loop=closed_loop, + ) + ) + # set context settings + params.set_context(context_options=context_group) + # inject for use in sampling code + model = ModelPatcherAndInjector(model) + model.motion_models = MotionModelGroup(motion_model) + model.motion_injection_params = params + + # save model sampling from BetaSchedule as object patch + # if autoselect, get suggested beta_schedule from motion model + if beta_schedule == BetaSchedules.AUTOSELECT and not model.motion_models.is_empty(): + beta_schedule = model.motion_models[0].model.get_best_beta_schedule(log=True) + new_model_sampling = BetaSchedules.to_model_sampling(beta_schedule, model) + if new_model_sampling is not None: + model.add_object_patch("model_sampling", new_model_sampling) + + del motion_model + 
return (model, latents) + + +class AnimateDiffCombine_Deprecated: + ffmpeg_warning_already_shown = False + @classmethod + def INPUT_TYPES(s): + ffmpeg_path = shutil.which("ffmpeg") + #Hide ffmpeg formats if ffmpeg isn't available + if ffmpeg_path is not None: + ffmpeg_formats = ["video/"+x[:-5] for x in folder_paths.get_filename_list(Folders.VIDEO_FORMATS)] + else: + ffmpeg_formats = [] + if not s.ffmpeg_warning_already_shown: + # Deprecated node are now hidden, so no need to show warning unless node is used. + # logger.warning("This warning can be ignored, you should not be using the deprecated AnimateDiff Combine node anyway. If you are, use Video Combine from ComfyUI-VideoHelperSuite instead. ffmpeg could not be found. Outputs that require it have been disabled") + s.ffmpeg_warning_already_shown = True + return { + "required": { + "images": ("IMAGE",), + "frame_rate": ( + "INT", + {"default": 8, "min": 1, "max": 24, "step": 1}, + ), + "loop_count": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), + "filename_prefix": ("STRING", {"default": "AnimateDiff"}), + "format": (["image/gif", "image/webp"] + ffmpeg_formats,), + "pingpong": ("BOOLEAN", {"default": False}), + "save_image": ("BOOLEAN", {"default": True}), + }, + "hidden": { + "prompt": "PROMPT", + "extra_pnginfo": "EXTRA_PNGINFO", + }, + } + + RETURN_TYPES = ("GIF",) + OUTPUT_NODE = True + CATEGORY = "" + FUNCTION = "generate_gif" + + def generate_gif( + self, + images, + frame_rate: int, + loop_count: int, + filename_prefix="AnimateDiff", + format="image/gif", + pingpong=False, + save_image=True, + prompt=None, + extra_pnginfo=None, + ): + logger.warning("Do not use AnimateDiff Combine node, it is deprecated. Use Video Combine node from ComfyUI-VideoHelperSuite instead. Video nodes from VideoHelperSuite are actively maintained, more feature-rich, and also automatically attempts to get ffmpeg.") + # convert images to numpy + frames: List[Image.Image] = [] + for image in images: + img = 255.0 * image.cpu().numpy() + img = Image.fromarray(np.clip(img, 0, 255).astype(np.uint8)) + frames.append(img) + + # get output information + output_dir = ( + folder_paths.get_output_directory() + if save_image + else folder_paths.get_temp_directory() + ) + ( + full_output_folder, + filename, + counter, + subfolder, + _, + ) = folder_paths.get_save_image_path(filename_prefix, output_dir) + + metadata = PngInfo() + if prompt is not None: + metadata.add_text("prompt", json.dumps(prompt)) + if extra_pnginfo is not None: + for x in extra_pnginfo: + metadata.add_text(x, json.dumps(extra_pnginfo[x])) + + # save first frame as png to keep metadata + file = f"{filename}_{counter:05}_.png" + file_path = os.path.join(full_output_folder, file) + frames[0].save( + file_path, + pnginfo=metadata, + compress_level=4, + ) + if pingpong: + frames = frames + frames[-2:0:-1] + + format_type, format_ext = format.split("/") + file = f"{filename}_{counter:05}_.{format_ext}" + file_path = os.path.join(full_output_folder, file) + if format_type == "image": + # Use pillow directly to save an animated image + frames[0].save( + file_path, + format=format_ext.upper(), + save_all=True, + append_images=frames[1:], + duration=round(1000 / frame_rate), + loop=loop_count, + compress_level=4, + ) + else: + # Use ffmpeg to save a video + ffmpeg_path = shutil.which("ffmpeg") + if ffmpeg_path is None: + #Should never be reachable + raise ProcessLookupError("Could not find ffmpeg") + + video_format_path = folder_paths.get_full_path("video_formats", format_ext + ".json") + with 
open(video_format_path, 'r') as stream: + video_format = json.load(stream) + file = f"{filename}_{counter:05}_.{video_format['extension']}" + file_path = os.path.join(full_output_folder, file) + dimensions = f"{frames[0].width}x{frames[0].height}" + args = [ffmpeg_path, "-v", "error", "-f", "rawvideo", "-pix_fmt", "rgb24", + "-s", dimensions, "-r", str(frame_rate), "-i", "-"] \ + + video_format['main_pass'] + [file_path] + + env=os.environ.copy() + if "environment" in video_format: + env.update(video_format["environment"]) + with subprocess.Popen(args, stdin=subprocess.PIPE, env=env) as proc: + for frame in frames: + proc.stdin.write(frame.tobytes()) + + previews = [ + { + "filename": file, + "subfolder": subfolder, + "type": "output" if save_image else "temp", + "format": format, + } + ] + return {"ui": {"gifs": previews}} diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_extras.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_extras.py new file mode 100644 index 0000000000000000000000000000000000000000..9b225347b773fd76ac5c956b2b41b67e75766154 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_extras.py @@ -0,0 +1,78 @@ +import torch + +import folder_paths +import nodes as comfy_nodes +from comfy.model_patcher import ModelPatcher +from comfy.sd import load_checkpoint_guess_config + +from .logger import logger +from .utils_model import BetaSchedules +from .model_injection import get_vanilla_model_patcher + + +class AnimateDiffUnload: + def __init__(self) -> None: + pass + + @classmethod + def INPUT_TYPES(s): + return {"required": {"model": ("MODEL",)}} + + RETURN_TYPES = ("MODEL",) + CATEGORY = "Animate Diff 🎭🅐🅓/extras" + FUNCTION = "unload_motion_modules" + + def unload_motion_modules(self, model: ModelPatcher): + # return model clone with ejected params + #model = eject_params_from_model(model) + model = get_vanilla_model_patcher(model) + return (model.clone(),) + + +class CheckpointLoaderSimpleWithNoiseSelect: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "ckpt_name": (folder_paths.get_filename_list("checkpoints"), ), + "beta_schedule": (BetaSchedules.ALIAS_LIST, {"default": BetaSchedules.USE_EXISTING}, ) + }, + "optional": { + "use_custom_scale_factor": ("BOOLEAN", {"default": False}), + "scale_factor": ("FLOAT", {"default": 0.18215, "min": 0.0, "max": 1.0, "step": 0.00001}) + } + } + RETURN_TYPES = ("MODEL", "CLIP", "VAE") + FUNCTION = "load_checkpoint" + + CATEGORY = "Animate Diff 🎭🅐🅓/extras" + + def load_checkpoint(self, ckpt_name, beta_schedule, output_vae=True, output_clip=True, use_custom_scale_factor=False, scale_factor=0.18215): + ckpt_path = folder_paths.get_full_path("checkpoints", ckpt_name) + out = load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings")) + # register chosen beta schedule on model - convert to beta_schedule name recognized by ComfyUI + new_model_sampling = BetaSchedules.to_model_sampling(beta_schedule, out[0]) + if new_model_sampling is not None: + out[0].model.model_sampling = new_model_sampling + if use_custom_scale_factor: + out[0].model.latent_format.scale_factor = scale_factor + return out + + +class EmptyLatentImageLarge: + def __init__(self, device="cpu"): + self.device = device + + @classmethod + def INPUT_TYPES(s): + return {"required": { "width": ("INT", {"default": 512, "min": 64, "max": comfy_nodes.MAX_RESOLUTION, "step": 8}), + "height": ("INT", {"default": 512, "min": 64, "max": 
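
# Illustration (separate from the files above): the ffmpeg branch of
# AnimateDiffCombine_Deprecated streams raw RGB24 frames to ffmpeg over stdin.
# A minimal standalone version of that pattern; the fixed output options stand in
# for the json-configured 'main_pass', and ffmpeg is assumed to be on PATH:
import shutil
import subprocess

def pipe_frames_to_ffmpeg(frames, file_path: str, frame_rate: int = 8):
    # frames: PIL RGB images of identical size
    ffmpeg_path = shutil.which("ffmpeg")
    if ffmpeg_path is None:
        raise ProcessLookupError("Could not find ffmpeg")
    dimensions = f"{frames[0].width}x{frames[0].height}"
    args = [ffmpeg_path, "-v", "error",
            "-f", "rawvideo", "-pix_fmt", "rgb24",  # headerless input, so size/rate must be stated
            "-s", dimensions, "-r", str(frame_rate),
            "-i", "-",                              # read the video stream from stdin
            "-pix_fmt", "yuv420p", file_path]
    with subprocess.Popen(args, stdin=subprocess.PIPE) as proc:
        for frame in frames:
            proc.stdin.write(frame.tobytes())       # PIL RGB bytes match rgb24
        proc.stdin.close()                          # EOF lets ffmpeg finalize the container
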
comfy_nodes.MAX_RESOLUTION, "step": 8}), + "batch_size": ("INT", {"default": 1, "min": 1, "max": 262144})}} + RETURN_TYPES = ("LATENT",) + FUNCTION = "generate" + + CATEGORY = "Animate Diff 🎭🅐🅓/extras" + + def generate(self, width, height, batch_size=1): + latent = torch.zeros([batch_size, 4, height // 8, width // 8]) + return ({"samples":latent}, ) diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_gen1.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_gen1.py new file mode 100644 index 0000000000000000000000000000000000000000..68f4c95ec6d265d282849042d5f26d7873299909 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_gen1.py @@ -0,0 +1,340 @@ +from pathlib import Path +import torch + +import comfy.sample as comfy_sample +from comfy.model_patcher import ModelPatcher + +from .ad_settings import AdjustPEGroup, AnimateDiffSettings, AdjustPE +from .context import ContextOptions, ContextOptionsGroup, ContextSchedules +from .logger import logger +from .utils_model import BetaSchedules, get_available_motion_loras, get_available_motion_models, get_motion_lora_path +from .utils_motion import ADKeyframeGroup, get_combined_multival +from .motion_lora import MotionLoraInfo, MotionLoraList +from .model_injection import InjectionParams, ModelPatcherAndInjector, MotionModelGroup, load_motion_lora_as_patches, load_motion_module_gen1, load_motion_module_gen2, validate_model_compatibility_gen2 +from .sample_settings import SampleSettings, SeedNoiseGeneration +from .sampling import motion_sample_factory + + +class AnimateDiffLoaderGen1: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "model": ("MODEL",), + "model_name": (get_available_motion_models(),), + "beta_schedule": (BetaSchedules.ALIAS_LIST, {"default": BetaSchedules.AUTOSELECT}), + #"apply_mm_groupnorm_hack": ("BOOLEAN", {"default": True}), + }, + "optional": { + "context_options": ("CONTEXT_OPTIONS",), + "motion_lora": ("MOTION_LORA",), + "ad_settings": ("AD_SETTINGS",), + "ad_keyframes": ("AD_KEYFRAMES",), + "sample_settings": ("SAMPLE_SETTINGS",), + "scale_multival": ("MULTIVAL",), + "effect_multival": ("MULTIVAL",), + } + } + + RETURN_TYPES = ("MODEL",) + CATEGORY = "Animate Diff 🎭🅐🅓/① Gen1 nodes ①" + FUNCTION = "load_mm_and_inject_params" + + def load_mm_and_inject_params(self, + model: ModelPatcher, + model_name: str, beta_schedule: str,# apply_mm_groupnorm_hack: bool, + context_options: ContextOptionsGroup=None, motion_lora: MotionLoraList=None, ad_settings: AnimateDiffSettings=None, + sample_settings: SampleSettings=None, scale_multival=None, effect_multival=None, ad_keyframes: ADKeyframeGroup=None, + ): + # load motion module and motion settings, if included + motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings) + # confirm that it is compatible with SD model + validate_model_compatibility_gen2(model=model, motion_model=motion_model) + # apply motion model to loaded_mm + if motion_lora is not None: + for lora in motion_lora.loras: + load_motion_lora_as_patches(motion_model, lora) + motion_model.scale_multival = scale_multival + motion_model.effect_multival = effect_multival + motion_model.keyframes = ad_keyframes.clone() if ad_keyframes else ADKeyframeGroup() + + # create injection params + params = InjectionParams(unlimited_area_hack=False, model_name=motion_model.model.mm_info.mm_name) + # apply context options + if context_options: + params.set_context(context_options) + + # set motion_scale and 
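
# Illustration (separate from the files above): EmptyLatentImageLarge allocates
# latents at 1/8 of pixel resolution with 4 channels, so shape and memory cost
# (float32, 4 bytes per element) can be sanity-checked like this:
import torch

width, height, batch_size = 512, 512, 16   # e.g. a 16-frame animation
latent = torch.zeros([batch_size, 4, height // 8, width // 8])
print(latent.shape)                         # torch.Size([16, 4, 64, 64])
print(latent.numel() * 4 / 1024**2, "MiB")  # 1.0 MiB
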
motion_model_settings
+        if not ad_settings:
+            ad_settings = AnimateDiffSettings()
+        ad_settings.attn_scale = 1.0
+        params.set_motion_model_settings(ad_settings)
+
+        # backwards compatibility to support old way of masking scale
+        if params.motion_model_settings.mask_attn_scale is not None:
+            motion_model.scale_multival = get_combined_multival(scale_multival, (params.motion_model_settings.mask_attn_scale * params.motion_model_settings.attn_scale))
+
+        # need to use a ModelPatcher that supports injection of motion modules into unet
+        model = ModelPatcherAndInjector(model)
+        model.motion_models = MotionModelGroup(motion_model)
+        model.sample_settings = sample_settings if sample_settings is not None else SampleSettings()
+        model.motion_injection_params = params
+
+        if model.sample_settings.custom_cfg is not None:
+            logger.info("[Sample Settings] custom_cfg is set; will override any KSampler cfg values or patches.")
+
+        if model.sample_settings.sigma_schedule is not None:
+            logger.info("[Sample Settings] sigma_schedule is set; will override beta_schedule.")
+            model.add_object_patch("model_sampling", model.sample_settings.sigma_schedule.clone().model_sampling)
+        else:
+            # save model sampling from BetaSchedule as object patch
+            # if autoselect, get suggested beta_schedule from motion model
+            if beta_schedule == BetaSchedules.AUTOSELECT and not model.motion_models.is_empty():
+                beta_schedule = model.motion_models[0].model.get_best_beta_schedule(log=True)
+            new_model_sampling = BetaSchedules.to_model_sampling(beta_schedule, model)
+            if new_model_sampling is not None:
+                model.add_object_patch("model_sampling", new_model_sampling)
+
+        del motion_model
+        return (model,)
+
+
+class LegacyAnimateDiffLoaderWithContext:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {
+            "required": {
+                "model": ("MODEL",),
+                "model_name": (get_available_motion_models(),),
+                "beta_schedule": (BetaSchedules.ALIAS_LIST, {"default": BetaSchedules.AUTOSELECT}),
+                #"apply_mm_groupnorm_hack": ("BOOLEAN", {"default": True}),
+            },
+            "optional": {
+                "context_options": ("CONTEXT_OPTIONS",),
+                "motion_lora": ("MOTION_LORA",),
+                "ad_settings": ("AD_SETTINGS",),
+                "sample_settings": ("SAMPLE_SETTINGS",),
+                "motion_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}),
+                "apply_v2_models_properly": ("BOOLEAN", {"default": True}),
+                "ad_keyframes": ("AD_KEYFRAMES",),
+            }
+        }
+
+    RETURN_TYPES = ("MODEL",)
+    CATEGORY = "Animate Diff 🎭🅐🅓/① Gen1 nodes ①"
+    FUNCTION = "load_mm_and_inject_params"
+
+
+    def load_mm_and_inject_params(self,
+        model: ModelPatcher,
+        model_name: str, beta_schedule: str,# apply_mm_groupnorm_hack: bool,
+        context_options: ContextOptionsGroup=None, motion_lora: MotionLoraList=None, ad_settings: AnimateDiffSettings=None, motion_model_settings: AnimateDiffSettings=None,
+        sample_settings: SampleSettings=None, motion_scale: float=1.0, apply_v2_models_properly: bool=False, ad_keyframes: ADKeyframeGroup=None,
+    ):
+        if ad_settings is not None:
+            motion_model_settings = ad_settings
+        # load motion module
+        motion_model = load_motion_module_gen1(model_name, model, motion_lora=motion_lora, motion_model_settings=motion_model_settings)
+        # set injection params
+        params = InjectionParams(
+                unlimited_area_hack=False,
+                model_name=model_name,
+                apply_v2_properly=apply_v2_models_properly,
+        )
+        if context_options:
+            params.set_context(context_options)
+        # set motion_scale and motion_model_settings
+        if not motion_model_settings:
+            
motion_model_settings = AnimateDiffSettings() + motion_model_settings.attn_scale = motion_scale + params.set_motion_model_settings(motion_model_settings) + + if params.motion_model_settings.mask_attn_scale is not None: + motion_model.scale_multival = params.motion_model_settings.mask_attn_scale * params.motion_model_settings.attn_scale + else: + motion_model.scale_multival = params.motion_model_settings.attn_scale + + motion_model.keyframes = ad_keyframes.clone() if ad_keyframes else ADKeyframeGroup() + + model = ModelPatcherAndInjector(model) + model.motion_models = MotionModelGroup(motion_model) + model.sample_settings = sample_settings if sample_settings is not None else SampleSettings() + model.motion_injection_params = params + + # save model sampling from BetaSchedule as object patch + # if autoselect, get suggested beta_schedule from motion model + if beta_schedule == BetaSchedules.AUTOSELECT and not model.motion_models.is_empty(): + beta_schedule = model.motion_models[0].model.get_best_beta_schedule(log=True) + new_model_sampling = BetaSchedules.to_model_sampling(beta_schedule, model) + if new_model_sampling is not None: + model.add_object_patch("model_sampling", new_model_sampling) + + del motion_model + return (model,) + + +class AnimateDiffModelSettings: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "min_motion_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}), + "max_motion_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}), + }, + "optional": { + "mask_motion_scale": ("MASK",), + } + } + + RETURN_TYPES = ("AD_SETTINGS",) + CATEGORY = "" #"Animate Diff 🎭🅐🅓/① Gen1 nodes ①/motion settings" + FUNCTION = "get_motion_model_settings" + + def get_motion_model_settings(self, mask_motion_scale: torch.Tensor=None, min_motion_scale: float=1.0, max_motion_scale: float=1.0): + motion_model_settings = AnimateDiffSettings( + mask_attn_scale=mask_motion_scale, + mask_attn_scale_min=min_motion_scale, + mask_attn_scale_max=max_motion_scale, + ) + + return (motion_model_settings,) + + +class AnimateDiffModelSettingsSimple: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "motion_pe_stretch": ("INT", {"default": 0, "min": 0, "step": 1}), + }, + "optional": { + "mask_motion_scale": ("MASK",), + "min_motion_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}), + "max_motion_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}), + } + } + + RETURN_TYPES = ("AD_SETTINGS",) + CATEGORY = "" #"Animate Diff 🎭🅐🅓/① Gen1 nodes ①/motion settings/experimental" + FUNCTION = "get_motion_model_settings" + + def get_motion_model_settings(self, motion_pe_stretch: int, + mask_motion_scale: torch.Tensor=None, min_motion_scale: float=1.0, max_motion_scale: float=1.0): + adjust_pe = AdjustPEGroup(AdjustPE(motion_pe_stretch=motion_pe_stretch)) + motion_model_settings = AnimateDiffSettings( + adjust_pe=adjust_pe, + mask_attn_scale=mask_motion_scale, + mask_attn_scale_min=min_motion_scale, + mask_attn_scale_max=max_motion_scale, + ) + + return (motion_model_settings,) + + +class AnimateDiffModelSettingsAdvanced: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "pe_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "attn_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "other_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "motion_pe_stretch": ("INT", {"default": 0, "min": 0, "step": 1}), + "cap_initial_pe_length": ("INT", 
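
# Illustration (separate from the files above): the mask-based settings pair a
# MASK with min/max motion scales. Assuming mask values are normalized to 0..1
# and mapped linearly between the two bounds (the repo's linear_conversion helper
# suggests as much; treat this as an interpretation, not the repo's exact code),
# the mapping reduces to:
import torch

def linear_map(mask: torch.Tensor, new_min: float, new_max: float) -> torch.Tensor:
    # 0..1 mask values become new_min..new_max per-pixel motion scales
    return mask * (new_max - new_min) + new_min

print(linear_map(torch.tensor([0.0, 0.5, 1.0]), 0.8, 1.2))  # tensor([0.8000, 1.0000, 1.2000])
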
{"default": 0, "min": 0, "step": 1}), + "interpolate_pe_to_length": ("INT", {"default": 0, "min": 0, "step": 1}), + "initial_pe_idx_offset": ("INT", {"default": 0, "min": 0, "step": 1}), + "final_pe_idx_offset": ("INT", {"default": 0, "min": 0, "step": 1}), + }, + "optional": { + "mask_motion_scale": ("MASK",), + "min_motion_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}), + "max_motion_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}), + } + } + + RETURN_TYPES = ("AD_SETTINGS",) + CATEGORY = "" #"Animate Diff 🎭🅐🅓/① Gen1 nodes ①/motion settings/experimental" + FUNCTION = "get_motion_model_settings" + + def get_motion_model_settings(self, pe_strength: float, attn_strength: float, other_strength: float, + motion_pe_stretch: int, + cap_initial_pe_length: int, interpolate_pe_to_length: int, + initial_pe_idx_offset: int, final_pe_idx_offset: int, + mask_motion_scale: torch.Tensor=None, min_motion_scale: float=1.0, max_motion_scale: float=1.0): + adjust_pe = AdjustPEGroup(AdjustPE(motion_pe_stretch=motion_pe_stretch, + cap_initial_pe_length=cap_initial_pe_length, interpolate_pe_to_length=interpolate_pe_to_length, + initial_pe_idx_offset=initial_pe_idx_offset, final_pe_idx_offset=final_pe_idx_offset)) + motion_model_settings = AnimateDiffSettings( + adjust_pe=adjust_pe, + pe_strength=pe_strength, + attn_strength=attn_strength, + other_strength=other_strength, + mask_attn_scale=mask_motion_scale, + mask_attn_scale_min=min_motion_scale, + mask_attn_scale_max=max_motion_scale, + ) + + return (motion_model_settings,) + + +class AnimateDiffModelSettingsAdvancedAttnStrengths: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "pe_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "attn_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "attn_q_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "attn_k_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "attn_v_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "attn_out_weight_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "attn_out_bias_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "other_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.0001}), + "motion_pe_stretch": ("INT", {"default": 0, "min": 0, "step": 1}), + "cap_initial_pe_length": ("INT", {"default": 0, "min": 0, "step": 1}), + "interpolate_pe_to_length": ("INT", {"default": 0, "min": 0, "step": 1}), + "initial_pe_idx_offset": ("INT", {"default": 0, "min": 0, "step": 1}), + "final_pe_idx_offset": ("INT", {"default": 0, "min": 0, "step": 1}), + }, + "optional": { + "mask_motion_scale": ("MASK",), + "min_motion_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}), + "max_motion_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}), + } + } + + RETURN_TYPES = ("AD_SETTINGS",) + CATEGORY = "" #"Animate Diff 🎭🅐🅓/① Gen1 nodes ①/motion settings/experimental" + FUNCTION = "get_motion_model_settings" + + def get_motion_model_settings(self, pe_strength: float, attn_strength: float, + attn_q_strength: float, + attn_k_strength: float, + attn_v_strength: float, + attn_out_weight_strength: float, + attn_out_bias_strength: float, + other_strength: float, + motion_pe_stretch: int, + cap_initial_pe_length: int, interpolate_pe_to_length: int, + initial_pe_idx_offset: int, final_pe_idx_offset: int, + 
mask_motion_scale: torch.Tensor=None, min_motion_scale: float=1.0, max_motion_scale: float=1.0): + adjust_pe = AdjustPEGroup(AdjustPE(motion_pe_stretch=motion_pe_stretch, + cap_initial_pe_length=cap_initial_pe_length, interpolate_pe_to_length=interpolate_pe_to_length, + initial_pe_idx_offset=initial_pe_idx_offset, final_pe_idx_offset=final_pe_idx_offset)) + motion_model_settings = AnimateDiffSettings( + adjust_pe=adjust_pe, + pe_strength=pe_strength, + attn_strength=attn_strength, + attn_q_strength=attn_q_strength, + attn_k_strength=attn_k_strength, + attn_v_strength=attn_v_strength, + attn_out_weight_strength=attn_out_weight_strength, + attn_out_bias_strength=attn_out_bias_strength, + other_strength=other_strength, + mask_attn_scale=mask_motion_scale, + mask_attn_scale_min=min_motion_scale, + mask_attn_scale_max=max_motion_scale, + ) + + return (motion_model_settings,) diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_gen2.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_gen2.py new file mode 100644 index 0000000000000000000000000000000000000000..7754b4d4371d966e82f6c15fa7254b52720fb76e --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_gen2.py @@ -0,0 +1,212 @@ +from pathlib import Path +import torch + +import comfy.sample as comfy_sample +from comfy.model_patcher import ModelPatcher + +from .ad_settings import AnimateDiffSettings +from .context import ContextOptions, ContextOptionsGroup, ContextSchedules +from .logger import logger +from .utils_model import BIGMAX, BetaSchedules, get_available_motion_loras, get_available_motion_models, get_motion_lora_path +from .utils_motion import ADKeyframeGroup, ADKeyframe +from .motion_lora import MotionLoraInfo, MotionLoraList +from .model_injection import (InjectionParams, ModelPatcherAndInjector, MotionModelGroup, MotionModelPatcher, create_fresh_motion_module, + load_motion_module_gen1, load_motion_module_gen2, load_motion_lora_as_patches, validate_model_compatibility_gen2) +from .sample_settings import SampleSettings, SeedNoiseGeneration +from .sampling import motion_sample_factory + + +class UseEvolvedSamplingNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "model": ("MODEL",), + "beta_schedule": (BetaSchedules.ALIAS_LIST, {"default": BetaSchedules.AUTOSELECT}), + }, + "optional": { + "m_models": ("M_MODELS",), + "context_options": ("CONTEXT_OPTIONS",), + "sample_settings": ("SAMPLE_SETTINGS",), + #"beta_schedule_override": ("BETA_SCHEDULE",), + } + } + + RETURN_TYPES = ("MODEL",) + CATEGORY = "Animate Diff 🎭🅐🅓/② Gen2 nodes ②" + FUNCTION = "use_evolved_sampling" + + def use_evolved_sampling(self, model: ModelPatcher, beta_schedule: str, m_models: MotionModelGroup=None, context_options: ContextOptionsGroup=None, + sample_settings: SampleSettings=None, beta_schedule_override=None): + if m_models is not None: + m_models = m_models.clone() + # for each motion model, confirm that it is compatible with SD model + for motion_model in m_models.models: + validate_model_compatibility_gen2(model=model, motion_model=motion_model) + # create injection params + model_name_list = [motion_model.model.mm_info.mm_name for motion_model in m_models.models] + model_names = ",".join(model_name_list) + # TODO: check if any apply_v2_properly is set to False + params = InjectionParams(unlimited_area_hack=False, model_name=model_names) + else: + params = InjectionParams() + # apply context options + if context_options: + params.set_context(context_options) + # need to use a 
ModelPatcher that supports injection of motion modules into unet + model = ModelPatcherAndInjector(model) + model.motion_models = m_models + model.sample_settings = sample_settings if sample_settings is not None else SampleSettings() + model.motion_injection_params = params + + if model.sample_settings.custom_cfg is not None: + logger.info("[Sample Settings] custom_cfg is set; will override any KSampler cfg values or patches.") + + if model.sample_settings.sigma_schedule is not None: + logger.info("[Sample Settings] sigma_schedule is set; will override beta_schedule.") + model.add_object_patch("model_sampling", model.sample_settings.sigma_schedule.clone().model_sampling) + else: + # save model_sampling from BetaSchedule as object patch + # if autoselect, get suggested beta_schedule from motion model + if beta_schedule == BetaSchedules.AUTOSELECT and not model.motion_models.is_empty(): + beta_schedule = model.motion_models[0].model.get_best_beta_schedule(log=True) + new_model_sampling = BetaSchedules.to_model_sampling(beta_schedule, model) + if new_model_sampling is not None: + model.add_object_patch("model_sampling", new_model_sampling) + + del m_models + return (model,) + + +class ApplyAnimateDiffModelNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "motion_model": ("MOTION_MODEL_ADE",), + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}), + "end_percent": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.001}), + }, + "optional": { + "motion_lora": ("MOTION_LORA",), + "scale_multival": ("MULTIVAL",), + "effect_multival": ("MULTIVAL",), + "ad_keyframes": ("AD_KEYFRAMES",), + "prev_m_models": ("M_MODELS",), + } + } + + RETURN_TYPES = ("M_MODELS",) + CATEGORY = "Animate Diff 🎭🅐🅓/② Gen2 nodes ②" + FUNCTION = "apply_motion_model" + + def apply_motion_model(self, motion_model: MotionModelPatcher, start_percent: float=0.0, end_percent: float=1.0, + motion_lora: MotionLoraList=None, ad_keyframes: ADKeyframeGroup=None, + scale_multival=None, effect_multival=None, + prev_m_models: MotionModelGroup=None,): + # set up motion models list + if prev_m_models is None: + prev_m_models = MotionModelGroup() + prev_m_models = prev_m_models.clone() + motion_model = motion_model.clone() + # check if internal motion model already present in previous model - create new if so + for prev_model in prev_m_models.models: + if motion_model.model is prev_model.model: + # need to create new internal model based on same state_dict + motion_model = create_fresh_motion_module(motion_model) + # apply motion model to loaded_mm + if motion_lora is not None: + for lora in motion_lora.loras: + load_motion_lora_as_patches(motion_model, lora) + motion_model.scale_multival = scale_multival + motion_model.effect_multival = effect_multival + motion_model.keyframes = ad_keyframes.clone() if ad_keyframes else ADKeyframeGroup() + motion_model.timestep_percent_range = (start_percent, end_percent) + # add to beginning, so that after injection, it will be the earliest of prev_m_models to be run + prev_m_models.add_to_start(mm=motion_model) + return (prev_m_models,) + + +class ApplyAnimateDiffModelBasicNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "motion_model": ("MOTION_MODEL_ADE",), + }, + "optional": { + "motion_lora": ("MOTION_LORA",), + "scale_multival": ("MULTIVAL",), + "effect_multival": ("MULTIVAL",), + "ad_keyframes": ("AD_KEYFRAMES",), + } + } + + RETURN_TYPES = ("M_MODELS",) + CATEGORY = "Animate Diff 🎭🅐🅓/② Gen2 nodes ②" + FUNCTION = 
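
# Illustration (separate from the files above): the Gen2 flow is Load -> Apply
# (chainable) -> UseEvolvedSampling. Because apply_motion_model() calls
# prev_m_models.add_to_start(), the most recently applied model is run earliest
# after injection. A wiring sketch that calls the node functions directly,
# assuming the animatediff package shown in these diffs is importable:
from animatediff.nodes_gen2 import (
    ApplyAnimateDiffModelNode, LoadAnimateDiffModelNode, UseEvolvedSamplingNode,
)
from animatediff.utils_model import BetaSchedules

def build_gen2_model(sd_model, model_name: str):
    (motion_model,) = LoadAnimateDiffModelNode().load_motion_model(model_name)
    # limit the motion model to the first 80% of sampling, as an example
    (m_models,) = ApplyAnimateDiffModelNode().apply_motion_model(
        motion_model, start_percent=0.0, end_percent=0.8)
    (patched,) = UseEvolvedSamplingNode().use_evolved_sampling(
        sd_model, beta_schedule=BetaSchedules.AUTOSELECT, m_models=m_models)
    return patched
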
"apply_motion_model" + + def apply_motion_model(self, + motion_model: MotionModelPatcher, motion_lora: MotionLoraList=None, + scale_multival=None, effect_multival=None, ad_keyframes=None): + # just a subset of normal ApplyAnimateDiffModelNode inputs + return ApplyAnimateDiffModelNode.apply_motion_model(self, motion_model, motion_lora=motion_lora, + scale_multival=scale_multival, effect_multival=effect_multival, + ad_keyframes=ad_keyframes) + + +class LoadAnimateDiffModelNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "model_name": (get_available_motion_models(),), + }, + "optional": { + "ad_settings": ("AD_SETTINGS",), + } + } + + RETURN_TYPES = ("MOTION_MODEL_ADE",) + RETURN_NAMES = ("MOTION_MODEL",) + CATEGORY = "Animate Diff 🎭🅐🅓/② Gen2 nodes ②" + FUNCTION = "load_motion_model" + + def load_motion_model(self, model_name: str, ad_settings: AnimateDiffSettings=None): + # load motion module and motion settings, if included + motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings) + return (motion_model,) + + +class ADKeyframeNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}, ), + }, + "optional": { + "prev_ad_keyframes": ("AD_KEYFRAMES", ), + "scale_multival": ("MULTIVAL",), + "effect_multival": ("MULTIVAL",), + "inherit_missing": ("BOOLEAN", {"default": True}, ), + "guarantee_steps": ("INT", {"default": 1, "min": 0, "max": BIGMAX}), + } + } + + RETURN_TYPES = ("AD_KEYFRAMES", ) + FUNCTION = "load_keyframe" + + CATEGORY = "Animate Diff 🎭🅐🅓" + + def load_keyframe(self, + start_percent: float, prev_ad_keyframes=None, + scale_multival: [float, torch.Tensor]=None, effect_multival: [float, torch.Tensor]=None, + inherit_missing: bool=True, guarantee_steps: int=1): + if not prev_ad_keyframes: + prev_ad_keyframes = ADKeyframeGroup() + prev_ad_keyframes = prev_ad_keyframes.clone() + keyframe = ADKeyframe(start_percent=start_percent, scale_multival=scale_multival, effect_multival=effect_multival, + inherit_missing=inherit_missing, guarantee_steps=guarantee_steps) + prev_ad_keyframes.add(keyframe) + return (prev_ad_keyframes,) diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_lora.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_lora.py new file mode 100644 index 0000000000000000000000000000000000000000..5cc3ba3ff88a6fc1ec5c2b2dd72e088587991f7b --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_lora.py @@ -0,0 +1,90 @@ +from pathlib import Path + +import folder_paths +import comfy.utils +import comfy.sd + +from .logger import logger +from .utils_model import get_available_motion_loras, get_motion_lora_path +from .motion_lora import MotionLoraInfo, MotionLoraList + + +class AnimateDiffLoraLoader: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "lora_name": (get_available_motion_loras(),), + "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.001}), + }, + "optional": { + "prev_motion_lora": ("MOTION_LORA",), + } + } + + RETURN_TYPES = ("MOTION_LORA",) + CATEGORY = "Animate Diff 🎭🅐🅓" + FUNCTION = "load_motion_lora" + + def load_motion_lora(self, lora_name: str, strength: float, prev_motion_lora: MotionLoraList=None): + if prev_motion_lora is None: + prev_motion_lora = MotionLoraList() + else: + prev_motion_lora = prev_motion_lora.clone() + # check if motion lora with name exists + lora_path = get_motion_lora_path(lora_name) + if not 
Path(lora_path).is_file():
+            raise FileNotFoundError(f"Motion lora with name '{lora_name}' not found.")
+        # create motion lora info to be loaded in AnimateDiff Loader
+        lora_info = MotionLoraInfo(name=lora_name, strength=strength)
+        prev_motion_lora.add_lora(lora_info)
+
+        return (prev_motion_lora,)
+
+
+class MaskedLoraLoader:
+    def __init__(self):
+        self.loaded_lora = None
+
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": { "model": ("MODEL",),
+                              "clip": ("CLIP", ),
+                              "lora_name": (folder_paths.get_filename_list("loras"), ),
+                              "strength_model": ("FLOAT", {"default": 1.0, "min": -20.0, "max": 20.0, "step": 0.01}),
+                              "strength_clip": ("FLOAT", {"default": 1.0, "min": -20.0, "max": 20.0, "step": 0.01}),
+                              }}
+    #RETURN_TYPES = ()
+    RETURN_TYPES = ("MODEL", "CLIP")
+    FUNCTION = "load_lora"
+
+    CATEGORY = "loaders"
+
+    def load_lora(self, model, clip, lora_name, strength_model, strength_clip):
+        if strength_model == 0 and strength_clip == 0:
+            return (model, clip)
+
+        lora_path = folder_paths.get_full_path("loras", lora_name)
+        lora = None
+        if self.loaded_lora is not None:
+            if self.loaded_lora[0] == lora_path:
+                lora = self.loaded_lora[1]
+            else:
+                temp = self.loaded_lora
+                self.loaded_lora = None
+                del temp
+
+        if lora is None:
+            lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
+            self.loaded_lora = (lora_path, lora)
+
+        from pathlib import Path
+        with open(Path(__file__).parent.parent.parent / "sd_lora_keys.txt", "w") as lfile:
+            for key in lora:
+                lfile.write(f"{key}:\t{lora[key].size()}\n")
+
+        #model_lora, clip_lora = comfy.sd.load_lora_for_models(model, clip, lora, strength_model, strength_clip)
+        #return (model_lora, clip_lora)
+        return (model, clip)
+
+
\ No newline at end of file
diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_multival.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_multival.py
new file mode 100644
index 0000000000000000000000000000000000000000..0abfe288804f6ce595e6c0c2842f8ae18efb74f4
--- /dev/null
+++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_multival.py
@@ -0,0 +1,136 @@
+from collections.abc import Iterable
+from typing import Union
+
+import torch
+from torch import Tensor
+
+from .utils_motion import linear_conversion, normalize_min_max, extend_to_batch_size
+
+
+class ScaleType:
+    ABSOLUTE = "absolute"
+    RELATIVE = "relative"
+    LIST = [ABSOLUTE, RELATIVE]
+
+
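
# Illustration (separate from the files above): a "multival" is either a plain
# float or a (batch, H, W) tensor whose leading dimension matches the frame
# count, so per-frame strengths broadcast over the spatial dims. The float-list
# path in MultivalDynamicNode below builds a (len, 1, 1) tensor for exactly that:
import torch

float_vals = [0.6, 0.8, 1.0]                                      # one strength per frame
per_frame = torch.tensor(float_vals).unsqueeze(-1).unsqueeze(-1)  # shape (3, 1, 1)
mask = torch.ones(3, 64, 64)                                      # a 3-frame mask
print((mask * per_frame).shape)                                   # torch.Size([3, 64, 64])
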
+class MultivalDynamicNode:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {
+            "required": {
+                "float_val": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001},),
+            },
+            "optional": {
+                "mask_optional": ("MASK",)
+            }
+        }
+
+    RETURN_TYPES = ("MULTIVAL",)
+    CATEGORY = "Animate Diff 🎭🅐🅓/multival"
+    FUNCTION = "create_multival"
+
+    def create_multival(self, float_val: Union[float, list[float]]=1.0, mask_optional: Tensor=None):
+        # first, normalize inputs
+        # if float_val is iterable, treat as a list and assume inputs are floats
+        float_is_iterable = False
+        if isinstance(float_val, Iterable):
+            float_is_iterable = True
+            float_val = list(float_val)
+            # if mask present, make sure float_val list can be applied to list - match lengths
+            if mask_optional is not None:
+                if len(float_val) < mask_optional.shape[0]:
+                    # repeat last entry enough times to match mask length
+                    float_val = float_val + [float_val[-1]] * (mask_optional.shape[0]-len(float_val))
+                if mask_optional.shape[0] < len(float_val):
+                    mask_optional = extend_to_batch_size(mask_optional, len(float_val))
+                float_val = float_val[:mask_optional.shape[0]]
+            float_val: Tensor = torch.tensor(float_val).unsqueeze(-1).unsqueeze(-1)
+        # now that inputs are normalized, figure out what value to actually return
+        if mask_optional is not None:
+            mask_optional = mask_optional.clone()
+            if float_is_iterable:
+                mask_optional = mask_optional[:] * float_val.to(mask_optional.dtype).to(mask_optional.device)
+            else:
+                mask_optional = mask_optional * float_val
+            return (mask_optional,)
+        else:
+            if not float_is_iterable:
+                return (float_val,)
+            # create a dummy mask of b,h,w=float_len,1,1 (single pixel)
+            # purpose is for float input to work with mask code, without special cases
+            float_len = float_val.shape[0] if float_is_iterable else 1
+            shape = (float_len,1,1)
+            mask_optional = torch.ones(shape)
+            mask_optional = mask_optional[:] * float_val.to(mask_optional.dtype).to(mask_optional.device)
+            return (mask_optional,)
+
+
+class MultivalScaledMaskNode:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {
+            "required": {
+                "min_float_val": ("FLOAT", {"default": 0.0, "min": 0.0, "step": 0.001}),
+                "max_float_val": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}),
+                "mask": ("MASK",),
+            },
+            "optional": {
+                "scaling": (ScaleType.LIST,),
+            }
+        }
+
+    RETURN_TYPES = ("MULTIVAL",)
+    CATEGORY = "Animate Diff 🎭🅐🅓/multival"
+    FUNCTION = "create_multival"
+
+    def create_multival(self, min_float_val: float, max_float_val: float, mask: Tensor, scaling: str=ScaleType.ABSOLUTE):
+        # TODO: allow min_float_val and max_float_val to be list[float]
+        if isinstance(min_float_val, Iterable):
+            raise ValueError(f"min_float_val must be type float (no lists allowed here), not {type(min_float_val).__name__}.")
+        if isinstance(max_float_val, Iterable):
+            raise ValueError(f"max_float_val must be type float (no lists allowed here), not {type(max_float_val).__name__}.")
+
+        if scaling == ScaleType.ABSOLUTE:
+            mask = linear_conversion(mask.clone(), new_min=min_float_val, new_max=max_float_val)
+        elif scaling == ScaleType.RELATIVE:
+            mask = normalize_min_max(mask.clone(), new_min=min_float_val, new_max=max_float_val)
+        else:
+            raise ValueError(f"scaling '{scaling}' not recognized.")
+        return MultivalDynamicNode.create_multival(self, mask_optional=mask)
+
+
+class MultivalDynamicFloatInputNode:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {
+            "required": {
+                "float_val": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.001, "forceInput": True},),
+            },
+            "optional": {
+                "mask_optional": ("MASK",)
+            }
+        }
+
+    RETURN_TYPES = ("MULTIVAL",)
+    CATEGORY = "Animate Diff 🎭🅐🅓/multival"
+    FUNCTION = "create_multival"
+
+    def create_multival(self, float_val: Union[float, list[float]]=None, mask_optional: Tensor=None):
+        return MultivalDynamicNode.create_multival(self, float_val=float_val, mask_optional=mask_optional)
+
+
+class MultivalFloatNode:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {
+            "required": {
+                "float_val": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.001},),
+            },
+        }
+
+    RETURN_TYPES = ("MULTIVAL",)
+    CATEGORY = "Animate Diff 🎭🅐🅓/multival"
+    FUNCTION = "create_multival"
+
+    def create_multival(self, float_val: Union[float, list[float]]=None):
+        return MultivalDynamicNode.create_multival(self, float_val=float_val)
diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_sample.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_sample.py
new file mode 100644
index 0000000000000000000000000000000000000000..e43e55acd203b5686a48c8bda3ef9e960e26fd3e
--- /dev/null
+++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_sample.py
@@ -0,0 +1,255
@@ +from typing import Union +from torch import Tensor + +from .freeinit import FreeInitFilter +from .sample_settings import (FreeInitOptions, IterationOptions, + NoiseLayerAdd, NoiseLayerAddWeighted, NoiseLayerGroup, NoiseLayerReplace, NoiseLayerType, + SeedNoiseGeneration, SampleSettings, CustomCFGKeyframeGroup, CustomCFGKeyframe) +from .utils_model import BIGMIN, BIGMAX, SigmaSchedule + + +class SampleSettingsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "batch_offset": ("INT", {"default": 0, "min": 0, "max": BIGMAX}), + "noise_type": (NoiseLayerType.LIST,), + "seed_gen": (SeedNoiseGeneration.LIST,), + "seed_offset": ("INT", {"default": 0, "min": BIGMIN, "max": BIGMAX}), + }, + "optional": { + "noise_layers": ("NOISE_LAYERS",), + "iteration_opts": ("ITERATION_OPTS",), + "seed_override": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "forceInput": True}), + "adapt_denoise_steps": ("BOOLEAN", {"default": False},), + "custom_cfg": ("CUSTOM_CFG",), + "sigma_schedule": ("SIGMA_SCHEDULE",), + } + } + + RETURN_TYPES = ("SAMPLE_SETTINGS",) + RETURN_NAMES = ("settings",) + CATEGORY = "Animate Diff 🎭🅐🅓" + FUNCTION = "create_settings" + + def create_settings(self, batch_offset: int, noise_type: str, seed_gen: str, seed_offset: int, noise_layers: NoiseLayerGroup=None, + iteration_opts: IterationOptions=None, seed_override: int=None, adapt_denoise_steps=False, + custom_cfg: CustomCFGKeyframeGroup=None, sigma_schedule: SigmaSchedule=None): + sampling_settings = SampleSettings(batch_offset=batch_offset, noise_type=noise_type, seed_gen=seed_gen, seed_offset=seed_offset, noise_layers=noise_layers, + iteration_opts=iteration_opts, seed_override=seed_override, adapt_denoise_steps=adapt_denoise_steps, + custom_cfg=custom_cfg, sigma_schedule=sigma_schedule) + return (sampling_settings,) + + +class NoiseLayerReplaceNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "batch_offset": ("INT", {"default": 0, "min": 0, "max": BIGMAX}), + "noise_type": (NoiseLayerType.LIST,), + "seed_gen_override": (SeedNoiseGeneration.LIST_WITH_OVERRIDE,), + "seed_offset": ("INT", {"default": 0, "min": BIGMIN, "max": BIGMAX}), + }, + "optional": { + "prev_noise_layers": ("NOISE_LAYERS",), + "mask_optional": ("MASK",), + "seed_override": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "forceInput": True}), + } + } + + RETURN_TYPES = ("NOISE_LAYERS",) + CATEGORY = "Animate Diff 🎭🅐🅓/noise layers" + FUNCTION = "create_layers" + + def create_layers(self, batch_offset: int, noise_type: str, seed_gen_override: str, seed_offset: int, + prev_noise_layers: NoiseLayerGroup=None, mask_optional: Tensor=None, seed_override: int=None,): + # prepare prev_noise_layers + if prev_noise_layers is None: + prev_noise_layers = NoiseLayerGroup() + prev_noise_layers = prev_noise_layers.clone() + # create layer + layer = NoiseLayerReplace(noise_type=noise_type, batch_offset=batch_offset, seed_gen_override=seed_gen_override, seed_offset=seed_offset, + seed_override=seed_override, mask=mask_optional) + prev_noise_layers.add_to_start(layer) + return (prev_noise_layers,) + + +class NoiseLayerAddNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "batch_offset": ("INT", {"default": 0, "min": 0, "max": BIGMAX}), + "noise_type": (NoiseLayerType.LIST,), + "seed_gen_override": (SeedNoiseGeneration.LIST_WITH_OVERRIDE,), + "seed_offset": ("INT", {"default": 0, "min": BIGMIN, "max": BIGMAX}), + "noise_weight": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 10.0, "step": 0.001}), 
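
# Illustration (separate from the files above): every noise layer node follows
# the same accumulation pattern - clone the incoming NOISE_LAYERS group, then
# add_to_start() the new layer, so the layer created last is applied first.
# Chaining sketch, assuming the animatediff package shown in these diffs is
# importable:
from animatediff.nodes_sample import NoiseLayerAddNode, NoiseLayerReplaceNode, SampleSettingsNode

(layers,) = NoiseLayerReplaceNode().create_layers(
    batch_offset=0, noise_type="default", seed_gen_override="use existing", seed_offset=1)
(layers,) = NoiseLayerAddNode().create_layers(
    batch_offset=0, noise_type="default", seed_gen_override="use existing", seed_offset=2,
    noise_weight=0.25, prev_noise_layers=layers)
(settings,) = SampleSettingsNode().create_settings(
    batch_offset=0, noise_type="default", seed_gen="comfy", seed_offset=0, noise_layers=layers)
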
+ }, + "optional": { + "prev_noise_layers": ("NOISE_LAYERS",), + "mask_optional": ("MASK",), + "seed_override": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "forceInput": True}), + } + } + + RETURN_TYPES = ("NOISE_LAYERS",) + CATEGORY = "Animate Diff 🎭🅐🅓/noise layers" + FUNCTION = "create_layers" + + def create_layers(self, batch_offset: int, noise_type: str, seed_gen_override: str, seed_offset: int, + noise_weight: float, + prev_noise_layers: NoiseLayerGroup=None, mask_optional: Tensor=None, seed_override: int=None,): + # prepare prev_noise_layers + if prev_noise_layers is None: + prev_noise_layers = NoiseLayerGroup() + prev_noise_layers = prev_noise_layers.clone() + # create layer + layer = NoiseLayerAdd(noise_type=noise_type, batch_offset=batch_offset, seed_gen_override=seed_gen_override, seed_offset=seed_offset, + seed_override=seed_override, mask=mask_optional, + noise_weight=noise_weight) + prev_noise_layers.add_to_start(layer) + return (prev_noise_layers,) + + +class NoiseLayerAddWeightedNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "batch_offset": ("INT", {"default": 0, "min": 0, "max": BIGMAX}), + "noise_type": (NoiseLayerType.LIST,), + "seed_gen_override": (SeedNoiseGeneration.LIST_WITH_OVERRIDE,), + "seed_offset": ("INT", {"default": 0, "min": BIGMIN, "max": BIGMAX}), + "noise_weight": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 10.0, "step": 0.001}), + "balance_multiplier": ("FLOAT", {"default": 1.0, "min": 0.0, "step": 0.001}), + }, + "optional": { + "prev_noise_layers": ("NOISE_LAYERS",), + "mask_optional": ("MASK",), + "seed_override": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "forceInput": True}), + } + } + + RETURN_TYPES = ("NOISE_LAYERS",) + CATEGORY = "Animate Diff 🎭🅐🅓/noise layers" + FUNCTION = "create_layers" + + def create_layers(self, batch_offset: int, noise_type: str, seed_gen_override: str, seed_offset: int, + noise_weight: float, balance_multiplier: float, + prev_noise_layers: NoiseLayerGroup=None, mask_optional: Tensor=None, seed_override: int=None,): + # prepare prev_noise_layers + if prev_noise_layers is None: + prev_noise_layers = NoiseLayerGroup() + prev_noise_layers = prev_noise_layers.clone() + # create layer + layer = NoiseLayerAddWeighted(noise_type=noise_type, batch_offset=batch_offset, seed_gen_override=seed_gen_override, seed_offset=seed_offset, + seed_override=seed_override, mask=mask_optional, + noise_weight=noise_weight, balance_multiplier=balance_multiplier) + prev_noise_layers.add_to_start(layer) + return (prev_noise_layers,) + + +class IterationOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "iterations": ("INT", {"default": 1, "min": 1}), + }, + "optional": { + "iter_batch_offset": ("INT", {"default": 0, "min": 0, "max": BIGMAX}), + "iter_seed_offset": ("INT", {"default": 0, "min": BIGMIN, "max": BIGMAX}), + } + } + + RETURN_TYPES = ("ITERATION_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/iteration opts" + FUNCTION = "create_iter_opts" + + def create_iter_opts(self, iterations: int, iter_batch_offset: int=0, iter_seed_offset: int=0): + iter_opts = IterationOptions(iterations=iterations, iter_batch_offset=iter_batch_offset, iter_seed_offset=iter_seed_offset) + return (iter_opts,) + + +class FreeInitOptionsNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "iterations": ("INT", {"default": 2, "min": 1}), + "filter": (FreeInitFilter.LIST,), + "d_s": ("FLOAT", {"default": 0.25, "min": 0.0, "max": 1.0, "step": 0.001}), + "d_t": ("FLOAT", 
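
# Illustration (separate from the files above): as the field names suggest,
# iteration options re-run the sampling pass `iterations` times, with
# iter_seed_offset shifting noise generation each round, so iteration i
# effectively samples around seed + i * iter_seed_offset. This sketches the
# bookkeeping only; the real loop lives in the sampling code:
base_seed, iterations, iter_seed_offset = 123, 3, 1
for i in range(iterations):
    print(f"iteration {i}: effective seed {base_seed + i * iter_seed_offset}")
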
{"default": 0.25, "min": 0.0, "max": 1.0, "step": 0.001}), + "n_butterworth": ("INT", {"default": 4, "min": 1, "max": 100},), + "sigma_step": ("INT", {"default": 999, "min": 1, "max": 999}), + "apply_to_1st_iter": ("BOOLEAN", {"default": False}), + "init_type": (FreeInitOptions.LIST,) + }, + "optional": { + "iter_batch_offset": ("INT", {"default": 0, "min": 0, "max": BIGMAX}), + "iter_seed_offset": ("INT", {"default": 1, "min": BIGMIN, "max": BIGMAX}), + } + } + + RETURN_TYPES = ("ITERATION_OPTS",) + CATEGORY = "Animate Diff 🎭🅐🅓/iteration opts" + FUNCTION = "create_iter_opts" + + def create_iter_opts(self, iterations: int, filter: str, d_s: float, d_t: float, n_butterworth: int, + sigma_step: int, apply_to_1st_iter: bool, init_type: str, + iter_batch_offset: int=0, iter_seed_offset: int=1): + # init_type does nothing for now, not until I add more methods of applying low+high freq noise + iter_opts = FreeInitOptions(iterations=iterations, step=sigma_step, apply_to_1st_iter=apply_to_1st_iter, + filter=filter, d_s=d_s, d_t=d_t, n=n_butterworth, init_type=init_type, + iter_batch_offset=iter_batch_offset, iter_seed_offset=iter_seed_offset) + return (iter_opts,) + + +class CustomCFGNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "cfg_multival": ("MULTIVAL",), + } + } + + RETURN_TYPES = ("CUSTOM_CFG",) + CATEGORY = "Animate Diff 🎭🅐🅓/sample settings" + FUNCTION = "create_custom_cfg" + + def create_custom_cfg(self, cfg_multival: Union[float, Tensor]): + keyframe = CustomCFGKeyframe(cfg_multival=cfg_multival) + cfg_custom = CustomCFGKeyframeGroup() + cfg_custom.add(keyframe) + return (cfg_custom,) + + +class CustomCFGKeyframeNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "cfg_multival": ("MULTIVAL",), + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}), + "guarantee_steps": ("INT", {"default": 1, "min": 0, "max": BIGMAX}), + }, + "optional": { + "prev_custom_cfg": ("CUSTOM_CFG",), + } + } + + RETURN_TYPES = ("CUSTOM_CFG",) + CATEGORY = "Animate Diff 🎭🅐🅓/sample settings" + FUNCTION = "create_custom_cfg" + + def create_custom_cfg(self, cfg_multival: Union[float, Tensor], start_percent: float=0.0, guarantee_steps: int=1, + prev_custom_cfg: CustomCFGKeyframeGroup=None): + if not prev_custom_cfg: + prev_custom_cfg = CustomCFGKeyframeGroup() + prev_custom_cfg = prev_custom_cfg.clone() + keyframe = CustomCFGKeyframe(cfg_multival=cfg_multival, start_percent=start_percent, guarantee_steps=guarantee_steps) + prev_custom_cfg.add(keyframe) + return (prev_custom_cfg,) diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_sigma_schedule.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_sigma_schedule.py new file mode 100644 index 0000000000000000000000000000000000000000..4431c7a4cb9e20a901425ada66636c011b2febbb --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_sigma_schedule.py @@ -0,0 +1,141 @@ +import torch + +from .utils_model import BetaSchedules, SigmaSchedule, ModelSamplingType, ModelSamplingConfig, InterpolationMethod + + +def validate_sigma_schedule_compatibility(schedule_A: SigmaSchedule, schedule_B: SigmaSchedule, + name_a: str="sigma_schedule_A", name_b: str="sigma_schedule_B"): + if schedule_A.total_sigmas() != schedule_B.total_sigmas(): + raise Exception(f"Weighted Average cannot be taken of Sigma Schedules that do not have the same amount of sigmas; " + + f"{name_a} has {schedule_A.total_sigmas()} sigmas (lcm={schedule_A.is_lcm()}), " + + f"{name_b} has 
{schedule_B.total_sigmas()} sigmas (lcm={schedule_B.is_lcm()}).") + + +class SigmaScheduleNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "beta_schedule": (BetaSchedules.ALIAS_ACTIVE_LIST,), + } + } + + RETURN_TYPES = ("SIGMA_SCHEDULE",) + CATEGORY = "Animate Diff 🎭🅐🅓/sample settings/sigma schedule" + FUNCTION = "get_sigma_schedule" + + def get_sigma_schedule(self, beta_schedule: str): + model_type = ModelSamplingType.from_alias(ModelSamplingType.EPS) + new_model_sampling = BetaSchedules._to_model_sampling(alias=beta_schedule, + model_type=model_type) + return (SigmaSchedule(model_sampling=new_model_sampling, model_type=model_type),) + + +class RawSigmaScheduleNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "raw_beta_schedule": (BetaSchedules.RAW_BETA_SCHEDULE_LIST,), + "linear_start": ("FLOAT", {"default": 0.00085, "min": 0.0, "max": 1.0, "step": 0.000001}), + "linear_end": ("FLOAT", {"default": 0.012, "min": 0.0, "max": 1.0, "step": 0.000001}), + #"cosine_s": ("FLOAT", {"default": 8e-3, "min": 0.0, "max": 1.0, "step": 0.000001}), + "sampling": (ModelSamplingType._FULL_LIST,), + "lcm_original_timesteps": ("INT", {"default": 50, "min": 1, "max": 1000}), + "lcm_zsnr": ("BOOLEAN", {"default": False}), + } + } + + RETURN_TYPES = ("SIGMA_SCHEDULE",) + CATEGORY = "Animate Diff 🎭🅐🅓/sample settings/sigma schedule" + FUNCTION = "get_sigma_schedule" + + def get_sigma_schedule(self, raw_beta_schedule: str, linear_start: float, linear_end: float,# cosine_s: float, + sampling: str, lcm_original_timesteps: int, lcm_zsnr: bool): + new_config = ModelSamplingConfig(beta_schedule=raw_beta_schedule, linear_start=linear_start, linear_end=linear_end) + if sampling != ModelSamplingType.LCM: + lcm_original_timesteps=None + lcm_zsnr=False + model_type = ModelSamplingType.from_alias(sampling) + new_model_sampling = BetaSchedules._to_model_sampling(alias=BetaSchedules.AUTOSELECT, model_type=model_type, config_override=new_config, original_timesteps=lcm_original_timesteps) + if lcm_zsnr: + SigmaSchedule.apply_zsnr(new_model_sampling=new_model_sampling) + return (SigmaSchedule(model_sampling=new_model_sampling, model_type=model_type),) + + +class WeightedAverageSigmaScheduleNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "schedule_A": ("SIGMA_SCHEDULE",), + "schedule_B": ("SIGMA_SCHEDULE",), + "weight_A": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.001}), + } + } + + RETURN_TYPES = ("SIGMA_SCHEDULE",) + CATEGORY = "Animate Diff 🎭🅐🅓/sample settings/sigma schedule" + FUNCTION = "get_sigma_schedule" + + def get_sigma_schedule(self, schedule_A: SigmaSchedule, schedule_B: SigmaSchedule, weight_A: float): + validate_sigma_schedule_compatibility(schedule_A, schedule_B) + new_sigmas = schedule_A.model_sampling.sigmas * weight_A + schedule_B.model_sampling.sigmas * (1-weight_A) + combo_schedule = schedule_A.clone() + combo_schedule.model_sampling.set_sigmas(new_sigmas) + return (combo_schedule,) + + +class InterpolatedWeightedAverageSigmaScheduleNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "schedule_A": ("SIGMA_SCHEDULE",), + "schedule_B": ("SIGMA_SCHEDULE",), + "weight_A_Start": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.001}), + "weight_A_End": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.001}), + "interpolation": (InterpolationMethod._LIST,), + } + } + + RETURN_TYPES = ("SIGMA_SCHEDULE",) + CATEGORY = "Animate Diff 🎭🅐🅓/sample settings/sigma schedule" + FUNCTION = 
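
# Illustration (separate from the files above): WeightedAverageSigmaScheduleNode
# is a plain per-element lerp of two equally sized sigma tables. With made-up
# sigma values:
import torch

sigmas_a = torch.tensor([14.0, 8.0, 3.0, 1.0, 0.0])
sigmas_b = torch.tensor([10.0, 6.0, 2.0, 0.5, 0.0])
weight_a = 0.5
print(sigmas_a * weight_a + sigmas_b * (1 - weight_a))  # tensor([12.0000, 7.0000, 2.5000, 0.7500, 0.0000])
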
"get_sigma_schedule" + + def get_sigma_schedule(self, schedule_A: SigmaSchedule, schedule_B: SigmaSchedule, + weight_A_Start: float, weight_A_End: float, interpolation: str): + validate_sigma_schedule_compatibility(schedule_A, schedule_B) + # get reverse weights, since sigmas are currently reversed + weights = InterpolationMethod.get_weights(num_from=weight_A_Start, num_to=weight_A_End, + length=schedule_A.total_sigmas(), method=interpolation, reverse=True) + weights = weights.to(schedule_A.model_sampling.sigmas.dtype).to(schedule_A.model_sampling.sigmas.device) + new_sigmas = schedule_A.model_sampling.sigmas * weights + schedule_B.model_sampling.sigmas * (1.0-weights) + combo_schedule = schedule_A.clone() + combo_schedule.model_sampling.set_sigmas(new_sigmas) + return (combo_schedule,) + + +class SplitAndCombineSigmaScheduleNode: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "schedule_Start": ("SIGMA_SCHEDULE",), + "schedule_End": ("SIGMA_SCHEDULE",), + "idx_split_percent": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.001}) + } + } + + RETURN_TYPES = ("SIGMA_SCHEDULE",) + CATEGORY = "Animate Diff 🎭🅐🅓/sample settings/sigma schedule" + FUNCTION = "get_sigma_schedule" + + def get_sigma_schedule(self, schedule_Start: SigmaSchedule, schedule_End: SigmaSchedule, idx_split_percent: float): + validate_sigma_schedule_compatibility(schedule_Start, schedule_End) + # first, calculate index to act as split; get diff from 1.0 since sigmas are flipped at this stage + idx = int((1.0-idx_split_percent) * schedule_Start.total_sigmas()) + new_sigmas = torch.cat([schedule_End.model_sampling.sigmas[:idx], schedule_Start.model_sampling.sigmas[idx:]], dim=0) + new_schedule = schedule_Start.clone() + new_schedule.model_sampling.set_sigmas(new_sigmas) + return (new_schedule,) diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sample_settings.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sample_settings.py new file mode 100644 index 0000000000000000000000000000000000000000..124b67bfc2ea4fbbceeac59be0bc4f3440951eeb --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sample_settings.py @@ -0,0 +1,555 @@ +from collections.abc import Iterable +from typing import Union +import torch +from torch import Tensor + +import comfy.sample +import comfy.samplers +from comfy.model_patcher import ModelPatcher +from comfy.model_base import BaseModel + +from . 
import freeinit +from .context import ContextOptions, ContextOptionsGroup +from .utils_model import SigmaSchedule +from .utils_motion import extend_to_batch_size, get_sorted_list_via_attr, prepare_mask_batch +from .logger import logger + + +def prepare_mask_ad(noise_mask, shape, device): + """ensures noise mask is of proper dimensions""" + noise_mask = torch.nn.functional.interpolate(noise_mask.reshape((-1, 1, noise_mask.shape[-2], noise_mask.shape[-1])), size=(shape[2], shape[3]), mode="bilinear") + #noise_mask = noise_mask.round() + noise_mask = torch.cat([noise_mask] * shape[1], dim=1) + noise_mask = comfy.utils.repeat_to_batch_size(noise_mask, shape[0]) + noise_mask = noise_mask.to(device) + return noise_mask + + +class NoiseLayerType: + DEFAULT = "default" + CONSTANT = "constant" + EMPTY = "empty" + REPEATED_CONTEXT = "repeated_context" + FREENOISE = "FreeNoise" + + LIST = [DEFAULT, CONSTANT, EMPTY, REPEATED_CONTEXT, FREENOISE] + + +class NoiseApplication: + ADD = "add" + ADD_WEIGHTED = "add_weighted" + REPLACE = "replace" + + LIST = [ADD, ADD_WEIGHTED, REPLACE] + + +class NoiseNormalize: + DISABLE = "disable" + NORMAL = "normal" + + LIST = [DISABLE, NORMAL] + + +class SampleSettings: + def __init__(self, batch_offset: int=0, noise_type: str=None, seed_gen: str=None, seed_offset: int=0, noise_layers: 'NoiseLayerGroup'=None, + iteration_opts=None, seed_override:int=None, negative_cond_flipflop=False, adapt_denoise_steps: bool=False, + custom_cfg: 'CustomCFGKeyframeGroup'=None, sigma_schedule: SigmaSchedule=None): + self.batch_offset = batch_offset + self.noise_type = noise_type if noise_type is not None else NoiseLayerType.DEFAULT + self.seed_gen = seed_gen if seed_gen is not None else SeedNoiseGeneration.COMFY + self.noise_layers = noise_layers if noise_layers else NoiseLayerGroup() + self.iteration_opts = iteration_opts if iteration_opts else IterationOptions() + self.seed_offset = seed_offset + self.seed_override = seed_override + self.negative_cond_flipflop = negative_cond_flipflop + self.adapt_denoise_steps = adapt_denoise_steps + self.custom_cfg = custom_cfg.clone() if custom_cfg else custom_cfg + self.sigma_schedule = sigma_schedule + + def prepare_noise(self, seed: int, latents: Tensor, noise: Tensor, extra_seed_offset=0, extra_args:dict={}, force_create_noise=True): + if self.seed_override is not None: + seed = self.seed_override + # if seed is iterable, attempt to do per-latent noises + if isinstance(seed, Iterable): + noise = SeedNoiseGeneration.create_noise_individual_seeds(seeds=seed, latents=latents, seed_offset=self.seed_offset+extra_seed_offset, extra_args=extra_args) + seed = seed[0]+self.seed_offset + else: + seed += self.seed_offset + # replace initial noise if not batch_offset 0 or Comfy seed_gen or not NoiseType default + if self.batch_offset != 0 or self.seed_offset != 0 or self.noise_type != NoiseLayerType.DEFAULT or self.seed_gen != SeedNoiseGeneration.COMFY or force_create_noise: + noise = SeedNoiseGeneration.create_noise(seed=seed+extra_seed_offset, latents=latents, existing_seed_gen=self.seed_gen, seed_gen=self.seed_gen, + noise_type=self.noise_type, batch_offset=self.batch_offset, extra_args=extra_args) + # apply noise layers + for noise_layer in self.noise_layers.layers: + # first, generate new noise matching seed gen override + layer_noise = noise_layer.create_layer_noise(existing_seed_gen=self.seed_gen, seed=seed, latents=latents, + extra_seed_offset=extra_seed_offset, extra_args=extra_args) + # next, get noise after applying layer + noise = 
noise_layer.apply_layer_noise(new_noise=layer_noise, old_noise=noise) + # noise prepared now + return noise + + def pre_run(self, model: ModelPatcher): + if self.custom_cfg is not None: + self.custom_cfg.reset() + + def cleanup(self): + if self.custom_cfg is not None: + self.custom_cfg.reset() + + def clone(self): + return SampleSettings(batch_offset=self.batch_offset, noise_type=self.noise_type, seed_gen=self.seed_gen, seed_offset=self.seed_offset, + noise_layers=self.noise_layers.clone(), iteration_opts=self.iteration_opts, seed_override=self.seed_override, + negative_cond_flipflop=self.negative_cond_flipflop, adapt_denoise_steps=self.adapt_denoise_steps, custom_cfg=self.custom_cfg, sigma_schedule=self.sigma_schedule) + + +class NoiseLayer: + def __init__(self, noise_type: str, batch_offset: int, seed_gen_override: str, seed_offset: int, seed_override: int=None, mask: Tensor=None): + self.application: str = NoiseApplication.REPLACE + self.noise_type = noise_type + self.batch_offset = batch_offset + self.seed_gen_override = seed_gen_override + self.seed_offset = seed_offset + self.seed_override = seed_override + self.mask = mask + + def create_layer_noise(self, existing_seed_gen: str, seed: int, latents: Tensor, extra_seed_offset=0, extra_args:dict={}) -> Tensor: + if self.seed_override is not None: + seed = self.seed_override + # if seed is iterable, attempt to do per-latent noises + if isinstance(seed, Iterable): + return SeedNoiseGeneration.create_noise_individual_seeds(seeds=seed, latents=latents, seed_offset=self.seed_offset+extra_seed_offset, extra_args=extra_args) + seed += self.seed_offset + extra_seed_offset + return SeedNoiseGeneration.create_noise(seed=seed, latents=latents, existing_seed_gen=existing_seed_gen, seed_gen=self.seed_gen_override, + noise_type=self.noise_type, batch_offset=self.batch_offset, extra_args=extra_args) + + def apply_layer_noise(self, new_noise: Tensor, old_noise: Tensor) -> Tensor: + return old_noise + + def get_noise_mask(self, noise: Tensor) -> Tensor: + if self.mask is None: + return 1 + noise_mask = self.mask.reshape((-1, 1, self.mask.shape[-2], self.mask.shape[-1])) + return prepare_mask_ad(noise_mask, noise.shape, noise.device) + + +class NoiseLayerReplace(NoiseLayer): + def __init__(self, noise_type: str, batch_offset: int, seed_gen_override: str, seed_offset: int, seed_override: int=None, mask: Tensor=None): + super().__init__(noise_type, batch_offset, seed_gen_override, seed_offset, seed_override, mask) + self.application = NoiseApplication.REPLACE + + def apply_layer_noise(self, new_noise: Tensor, old_noise: Tensor) -> Tensor: + noise_mask = self.get_noise_mask(old_noise) + return (1-noise_mask)*old_noise + noise_mask*new_noise + + +class NoiseLayerAdd(NoiseLayer): + def __init__(self, noise_type: str, batch_offset: int, seed_gen_override: str, seed_offset: int, seed_override: int=None, mask: Tensor=None, + noise_weight=1.0): + super().__init__(noise_type, batch_offset, seed_gen_override, seed_offset, seed_override, mask) + self.noise_weight = noise_weight + self.application = NoiseApplication.ADD + + def apply_layer_noise(self, new_noise: Tensor, old_noise: Tensor) -> Tensor: + noise_mask = self.get_noise_mask(old_noise) + return (1-noise_mask)*old_noise + noise_mask*(old_noise + new_noise * self.noise_weight) + + +class NoiseLayerAddWeighted(NoiseLayerAdd): + def __init__(self, noise_type: str, batch_offset: int, seed_gen_override: str, seed_offset: int, seed_override: int=None, mask: Tensor=None, + noise_weight=1.0, balance_multiplier=1.0): 
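
# Illustration (separate from the files above): with m the 0..1 noise mask,
# w the noise_weight and b the balance_multiplier, the three application modes
# around this point reduce to masked blends:
#   replace:      (1-m)*old + m*new
#   add:          (1-m)*old + m*(old + new*w)
#   add_weighted: (1-m)*old + m*(old*(1 - w*b) + new*w)
# Tiny numeric check of add_weighted:
import torch

old, new = torch.full((2, 2), 1.0), torch.full((2, 2), 3.0)
m, w, b = 1.0, 0.5, 1.0
print((1 - m) * old + m * (old * (1.0 - w * b) + new * w))  # every element 2.0
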
+ super().__init__(noise_type, batch_offset, seed_gen_override, seed_offset, seed_override, mask, noise_weight) + self.balance_multiplier = balance_multiplier + self.application = NoiseApplication.ADD_WEIGHTED + + def apply_layer_noise(self, new_noise: Tensor, old_noise: Tensor) -> Tensor: + noise_mask = self.get_noise_mask(old_noise) + return (1-noise_mask)*old_noise + noise_mask*(old_noise * (1.0-(self.noise_weight*self.balance_multiplier)) + new_noise * self.noise_weight) + + +class NoiseLayerGroup: + def __init__(self): + self.layers: list[NoiseLayer] = [] + + def add(self, layer: NoiseLayer) -> None: + # add to the end of list + self.layers.append(layer) + + def add_to_start(self, layer: NoiseLayer) -> None: + # add to the beginning of list + self.layers.insert(0, layer) + + def __getitem__(self, index) -> NoiseLayer: + return self.layers[index] + + def is_empty(self) -> bool: + return len(self.layers) == 0 + + def clone(self) -> 'NoiseLayerGroup': + cloned = NoiseLayerGroup() + for layer in self.layers: + cloned.add(layer) + return cloned + +class SeedNoiseGeneration: + COMFY = "comfy" + AUTO1111 = "auto1111" + AUTO1111GPU = "auto1111 [gpu]" # TODO: implement this + USE_EXISTING = "use existing" + + LIST = [COMFY, AUTO1111] + LIST_WITH_OVERRIDE = [USE_EXISTING, COMFY, AUTO1111] + + @classmethod + def create_noise(cls, seed: int, latents: Tensor, existing_seed_gen: str=COMFY, seed_gen: str=USE_EXISTING, noise_type: str=NoiseLayerType.DEFAULT, batch_offset: int=0, extra_args: dict={}): + # determine if should use existing type + if seed_gen == cls.USE_EXISTING: + seed_gen = existing_seed_gen + if seed_gen == cls.COMFY: + return cls.create_noise_comfy(seed, latents, noise_type, batch_offset, extra_args) + elif seed_gen in [cls.AUTO1111, cls.AUTO1111GPU]: + return cls.create_noise_auto1111(seed, latents, noise_type, batch_offset, extra_args) + raise ValueError(f"Noise seed_gen {seed_gen} is not recognized.") + + @staticmethod + def create_noise_comfy(seed: int, latents: Tensor, noise_type: str=NoiseLayerType.DEFAULT, batch_offset: int=0, extra_args: dict={}): + common_noise = SeedNoiseGeneration._create_common_noise(seed, latents, noise_type, batch_offset, extra_args) + if common_noise is not None: + return common_noise + if noise_type == NoiseLayerType.CONSTANT: + generator = torch.manual_seed(seed) + length = latents.shape[0] + single_shape = (1 + batch_offset, latents.shape[1], latents.shape[2], latents.shape[3]) + single_noise = torch.randn(single_shape, dtype=latents.dtype, layout=latents.layout, generator=generator, device="cpu") + return torch.cat([single_noise[batch_offset:]] * length, dim=0) + # comfy creates noise with a single seed for the entire shape of the latents batched tensor + generator = torch.manual_seed(seed) + offset_shape = (latents.shape[0] + batch_offset, latents.shape[1], latents.shape[2], latents.shape[3]) + final_noise = torch.randn(offset_shape, dtype=latents.dtype, layout=latents.layout, generator=generator, device="cpu") + final_noise = final_noise[batch_offset:] + # convert to derivative noise type, if needed + derivative_noise = SeedNoiseGeneration._create_derivative_noise(final_noise, noise_type=noise_type, seed=seed, extra_args=extra_args) + if derivative_noise is not None: + return derivative_noise + return final_noise + + @staticmethod + def create_noise_auto1111(seed: int, latents: Tensor, noise_type: str=NoiseLayerType.DEFAULT, batch_offset: int=0, extra_args: dict={}): + common_noise = SeedNoiseGeneration._create_common_noise(seed, latents, 
noise_type, batch_offset, extra_args) + if common_noise is not None: + return common_noise + if noise_type == NoiseLayerType.CONSTANT: + generator = torch.manual_seed(seed+batch_offset) + length = latents.shape[0] + single_shape = (1, latents.shape[1], latents.shape[2], latents.shape[3]) + single_noise = torch.randn(single_shape, dtype=latents.dtype, layout=latents.layout, generator=generator, device="cpu") + return torch.cat([single_noise] * length, dim=0) + # auto1111 applies growing seeds for a batch + length = latents.shape[0] + single_shape = (1, latents.shape[1], latents.shape[2], latents.shape[3]) + all_noises = [] + # i starts at 0 + for i in range(length): + generator = torch.manual_seed(seed+i+batch_offset) + all_noises.append(torch.randn(single_shape, dtype=latents.dtype, layout=latents.layout, generator=generator, device="cpu")) + final_noise = torch.cat(all_noises, dim=0) + # convert to derivative noise type, if needed + derivative_noise = SeedNoiseGeneration._create_derivative_noise(final_noise, noise_type=noise_type, seed=seed, extra_args=extra_args) + if derivative_noise is not None: + return derivative_noise + return final_noise + + @staticmethod + def create_noise_individual_seeds(seeds: list[int], latents: Tensor, seed_offset: int=0, extra_args: dict={}): + length = latents.shape[0] + if len(seeds) < length: + raise ValueError(f"{len(seeds)} seeds in seed_override were provided, but at least {length} are required to work with the current latents.") + seeds = seeds[:length] + single_shape = (1, latents.shape[1], latents.shape[2], latents.shape[3]) + all_noises = [] + for seed in seeds: + generator = torch.manual_seed(seed+seed_offset) + all_noises.append(torch.randn(single_shape, dtype=latents.dtype, layout=latents.layout, generator=generator, device="cpu")) + return torch.cat(all_noises, dim=0) + + @staticmethod + def _create_common_noise(seed: int, latents: Tensor, noise_type: str=NoiseLayerType.DEFAULT, batch_offset: int=0, extra_args: dict={}): + if noise_type == NoiseLayerType.EMPTY: + return torch.zeros_like(latents) + return None + + @staticmethod + def _create_derivative_noise(noise: Tensor, noise_type: str, seed: int, extra_args: dict): + derivative_func = DERIVATIVE_NOISE_FUNC_MAP.get(noise_type, None) + if derivative_func is None: + return None + return derivative_func(noise=noise, seed=seed, extra_args=extra_args) + + @staticmethod + def _convert_to_repeated_context(noise: Tensor, extra_args: dict, **kwargs): + # if no context_length, return unmodified noise + opts: ContextOptionsGroup = extra_args["context_options"] + context_length: int = opts.context_length if not opts.view_options else opts.view_options.context_length + if context_length is None: + return noise + length = noise.shape[0] + noise = noise[:context_length] + cat_count = (length // context_length) + 1 + return torch.cat([noise] * cat_count, dim=0)[:length] + + @staticmethod + def _convert_to_freenoise(noise: Tensor, seed: int, extra_args: dict, **kwargs): + # if no context_length, return unmodified noise + opts: ContextOptionsGroup = extra_args["context_options"] + context_length: int = opts.context_length if not opts.view_options else opts.view_options.context_length + context_overlap: int = opts.context_overlap if not opts.view_options else opts.view_options.context_overlap + video_length: int = noise.shape[0] + if context_length is None: + return noise + delta = context_length - context_overlap + generator = torch.manual_seed(seed) + + for start_idx in range(0, video_length-context_length, 
delta): + # start_idx corresponds to the beginning of a context window + # goal: place shuffled in the delta region right after the end of the context window + # if space after context window is not enough to place the noise, adjust and finish + place_idx = start_idx + context_length + # if place_idx is outside the valid indexes, we are already finished + if place_idx >= video_length: + break + end_idx = place_idx - 1 + # if there is not enough room to copy delta amount of indexes, copy limited amount and finish + if end_idx + delta >= video_length: + final_delta = video_length - place_idx + # generate list of indexes in final delta region + list_idx = torch.Tensor(list(range(start_idx,start_idx+final_delta))).to(torch.long) + # shuffle list + list_idx = list_idx[torch.randperm(final_delta, generator=generator)] + # apply shuffled indexes + noise[place_idx:place_idx+final_delta] = noise[list_idx] + break + # otherwise, do normal behavior + # generate list of indexes in delta region + list_idx = torch.Tensor(list(range(start_idx,start_idx+delta))).to(torch.long) + # shuffle list + list_idx = list_idx[torch.randperm(delta, generator=generator)] + # apply shuffled indexes + noise[place_idx:place_idx+delta] = noise[list_idx] + return noise + + +DERIVATIVE_NOISE_FUNC_MAP = { + NoiseLayerType.REPEATED_CONTEXT: SeedNoiseGeneration._convert_to_repeated_context, + NoiseLayerType.FREENOISE: SeedNoiseGeneration._convert_to_freenoise, + } + + +class IterationOptions: + SAMPLER = "sampler" + + def __init__(self, iterations: int=1, cache_init_noise=False, cache_init_latents=False, + iter_batch_offset: int=0, iter_seed_offset: int=0): + self.iterations = iterations + self.cache_init_noise = cache_init_noise + self.cache_init_latents = cache_init_latents + self.iter_batch_offset = iter_batch_offset + self.iter_seed_offset = iter_seed_offset + self.need_sampler = False + + def get_sigma(self, model: ModelPatcher, step: int): + model_sampling = model.model.model_sampling + if "model_sampling" in model.object_patches: + model_sampling = model.object_patches["model_sampling"] + return model_sampling.sigmas[step] + + def initialize(self, latents: Tensor): + pass + + def preprocess_latents(self, curr_i: int, model: ModelPatcher, latents: Tensor, noise: Tensor, + seed: int, sample_settings: SampleSettings, noise_extra_args: dict, **kwargs): + if curr_i == 0 or (self.iter_batch_offset == 0 and self.iter_seed_offset == 0): + return latents, noise + temp_sample_settings = sample_settings.clone() + temp_sample_settings.batch_offset += self.iter_batch_offset * curr_i + temp_sample_settings.seed_offset += self.iter_seed_offset * curr_i + return latents, temp_sample_settings.prepare_noise(seed=seed, latents=latents, noise=None, + extra_args=noise_extra_args, force_create_noise=True) + + +class FreeInitOptions(IterationOptions): + FREEINIT_SAMPLER = "FreeInit [sampler sigma]" + FREEINIT_MODEL = "FreeInit [model sigma]" + DINKINIT_V1 = "DinkInit_v1" + + LIST = [FREEINIT_SAMPLER, FREEINIT_MODEL, DINKINIT_V1] + + def __init__(self, iterations: int, step: int=999, apply_to_1st_iter: bool=False, + filter=freeinit.FreeInitFilter.GAUSSIAN, d_s=0.25, d_t=0.25, n=4, init_type=FREEINIT_SAMPLER, + iter_batch_offset: int=0, iter_seed_offset: int=1): + super().__init__(iterations=iterations, cache_init_noise=True, cache_init_latents=True, + iter_batch_offset=iter_batch_offset, iter_seed_offset=iter_seed_offset) + self.apply_to_1st_iter = apply_to_1st_iter + self.step = step + self.filter = filter + self.d_s = d_s + self.d_t = d_t + 
self.n = n + self.freq_filter = None + self.freq_filter2 = None + self.need_sampler = True if init_type in [self.FREEINIT_SAMPLER] else False + self.init_type = init_type + + def initialize(self, latents: Tensor): + self.freq_filter = freeinit.get_freq_filter(latents.shape, device=latents.device, filter_type=self.filter, + n=self.n, d_s=self.d_s, d_t=self.d_t) + + def preprocess_latents(self, curr_i: int, model: ModelPatcher, latents: Tensor, noise: Tensor, cached_latents: Tensor, cached_noise: Tensor, + seed:int, sample_settings: SampleSettings, noise_extra_args: dict, sampler: comfy.samplers.KSampler=None, **kwargs): + # if first iter and should not apply, do nothing + if curr_i == 0 and not self.apply_to_1st_iter: + return latents, noise + # otherwise, do FreeInit stuff + if self.init_type in [self.FREEINIT_SAMPLER, self.FREEINIT_MODEL]: + # NOTE: This should be very close (if not exactly) to how FreeInit is intended to initialize noise the latents. + # The trick is that FreeInit is dependent on the behavior of diffuser's DDIMScheduler.add_noise function. + # The typical noising method of latents + noise * sigma will NOT work. + # 1. apply initial noise with appropriate step sigma, normalized against scale_factor + if sampler is not None: + sigma = sampler.sigmas[999-self.step].to(latents.device) / (model.model.latent_format.scale_factor) + else: + sigma = self.get_sigma(model, self.step-1000).to(latents.device) / (model.model.latent_format.scale_factor) + alpha_cumprod = 1 / ((sigma * sigma) + 1) + sqrt_alpha_prod = alpha_cumprod ** 0.5 + sqrt_one_minus_alpha_prod = (1 - alpha_cumprod) ** 0.5 + noised_latents = latents * sqrt_alpha_prod + noise * sqrt_one_minus_alpha_prod + # 2. create random noise z_rand for high frequency + temp_sample_settings = sample_settings.clone() + temp_sample_settings.batch_offset += self.iter_batch_offset * curr_i + temp_sample_settings.seed_offset += self.iter_seed_offset * curr_i + z_rand = temp_sample_settings.prepare_noise(seed=seed, latents=latents, noise=None, + extra_args=noise_extra_args, force_create_noise=True) + # 3. noise reinitialization - combines low freq. noise from noised_latents and high freq. noise from z_rand + noised_latents = freeinit.freq_mix_3d(x=noised_latents, noise=z_rand.to(dtype=latents.dtype, device=latents.device), LPF=self.freq_filter) + return cached_latents, noised_latents + elif self.init_type == self.DINKINIT_V1: + # NOTE: This was my first attempt at implementing FreeInit; it sorta works due to my alpha_cumprod shenanigans, + # but completely by accident. + # 1. apply initial noise with appropriate step sigma + sigma = self.get_sigma(model, self.step-1000).to(latents.device) + alpha_cumprod = 1 / ((sigma * sigma) + 1) #1 / ((sigma * sigma)) # 1 / ((sigma * sigma) + 1) + noised_latents = (latents + (cached_noise * sigma)) * alpha_cumprod + # 2. create random noise z_rand for high frequency + temp_sample_settings = sample_settings.clone() + temp_sample_settings.batch_offset += self.iter_batch_offset * curr_i + temp_sample_settings.seed_offset += self.iter_seed_offset * curr_i + z_rand = temp_sample_settings.prepare_noise(seed=seed, latents=latents, noise=None, + extra_args=noise_extra_args, force_create_noise=True) + ####z_rand = torch.randn_like(latents, dtype=latents.dtype, device=latents.device) + # 3. noise reinitialization - combines low freq. noise from noised_latents and high freq. 
noise from z_rand + noised_latents = freeinit.freq_mix_3d(x=noised_latents, noise=z_rand.to(dtype=latents.dtype, device=latents.device), LPF=self.freq_filter) + return cached_latents, noised_latents + else: + raise ValueError(f"FreeInit init_type '{self.init_type}' is not recognized.") + + +class CustomCFGKeyframe: + def __init__(self, cfg_multival: Union[float, Tensor], start_percent=0.0, guarantee_steps=1): + self.cfg_multival = cfg_multival + # scheduling + self.start_percent = float(start_percent) + self.start_t = 999999999.9 + self.guarantee_steps = guarantee_steps + + def clone(self): + c = CustomCFGKeyframe(cfg_multival=self.cfg_multival, + start_percent=self.start_percent, guarantee_steps=self.guarantee_steps) + c.start_t = self.start_t + return c + + +class CustomCFGKeyframeGroup: + def __init__(self): + self.keyframes: list[CustomCFGKeyframe] = [] + self._current_keyframe: CustomCFGKeyframe = None + self._current_used_steps: int = 0 + self._current_index: int = 0 + + def reset(self): + self._current_keyframe = None + self._current_used_steps = 0 + self._current_index = 0 + self._set_first_as_current() + + def add(self, keyframe: CustomCFGKeyframe): + # add to end of list, then sort + self.keyframes.append(keyframe) + self.keyframes = get_sorted_list_via_attr(self.keyframes, "start_percent") + self._set_first_as_current() + + def _set_first_as_current(self): + if len(self.keyframes) > 0: + self._current_keyframe = self.keyframes[0] + else: + self._current_keyframe = None + + def has_index(self, index: int) -> int: + return index >=0 and index < len(self.keyframes) + + def is_empty(self) -> bool: + return len(self.keyframes) == 0 + + def clone(self): + cloned = CustomCFGKeyframeGroup() + for keyframe in self.keyframes: + cloned.keyframes.append(keyframe) + cloned._set_first_as_current() + return cloned + + def initialize_timesteps(self, model: BaseModel): + for keyframe in self.keyframes: + keyframe.start_t = model.model_sampling.percent_to_sigma(keyframe.start_percent) + + def prepare_current_keyframe(self, t: Tensor): + curr_t: float = t[0] + prev_index = self._current_index + # if met guaranteed steps, look for next keyframe in case need to switch + if self._current_used_steps >= self._current_keyframe.guarantee_steps: + # if has next index, loop through and see if need t oswitch + if self.has_index(self._current_index+1): + for i in range(self._current_index+1, len(self.keyframes)): + eval_c = self.keyframes[i] + # check if start_t is greater or equal to curr_t + # NOTE: t is in terms of sigmas, not percent, so bigger number = earlier step in sampling + if eval_c.start_t >= curr_t: + self._current_index = i + self._current_keyframe = eval_c + self._current_used_steps = 0 + # if guarantee_steps greater than zero, stop searching for other keyframes + if self._current_keyframe.guarantee_steps > 0: + break + # if eval_c is outside the percent range, stop looking further + else: break + # update steps current context is used + self._current_used_steps += 1 + + def patch_model(self, model: ModelPatcher) -> ModelPatcher: + def evolved_custom_cfg(args): + cond: Tensor = args["cond"] + uncond: Tensor = args["uncond"] + # cond scale is based purely off of CustomCFG - cond_scale input in sampler is ignored! 
+ cond_scale = self.cfg_multival + if isinstance(cond_scale, Tensor): + cond_scale = prepare_mask_batch(cond_scale.to(cond.dtype).to(cond.device), cond.shape) + cond_scale = extend_to_batch_size(cond_scale, cond.shape[0]) + return uncond + (cond - uncond) * cond_scale + + model = model.clone() + model.set_model_sampler_cfg_function(evolved_custom_cfg) + return model + + # properties shadow those of CustomCFGKeyframe + @property + def cfg_multival(self): + if self._current_keyframe != None: + return self._current_keyframe.cfg_multival + return None diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py new file mode 100644 index 0000000000000000000000000000000000000000..eee218b4e3826be52eb1e5199d42e53f2618e2a1 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py @@ -0,0 +1,528 @@ +from typing import Callable + +import math +import torch +from torch import Tensor +from torch.nn.functional import group_norm +from einops import rearrange + +import comfy.ldm.modules.attention as attention +from comfy.ldm.modules.diffusionmodules import openaimodel +import comfy.model_management as model_management +import comfy.samplers +import comfy.sample +import comfy.utils +from comfy.controlnet import ControlBase +import comfy.ops + +from .context import ContextFuseMethod, ContextSchedules, get_context_weights, get_context_windows +from .sample_settings import IterationOptions, SampleSettings, SeedNoiseGeneration, prepare_mask_ad +from .utils_model import ModelTypeSD, wrap_function_to_inject_xformers_bug_info +from .model_injection import InjectionParams, ModelPatcherAndInjector, MotionModelGroup, MotionModelPatcher +from .motion_module_ad import AnimateDiffFormat, AnimateDiffInfo, AnimateDiffVersion, VanillaTemporalModule +from .logger import logger + + +################################################################################## +###################################################################### +# Global variable to use to more conveniently hack variable access into samplers +class AnimateDiffHelper_GlobalState: + def __init__(self): + self.motion_models: MotionModelGroup = None + self.params: InjectionParams = None + self.sample_settings: SampleSettings = None + self.reset() + + def initialize(self, model): + # this function is to be run in sampling func + if not self.initialized: + self.initialized = True + if self.motion_models is not None: + self.motion_models.initialize_timesteps(model) + if self.params.context_options is not None: + self.params.context_options.initialize_timesteps(model) + if self.sample_settings.custom_cfg is not None: + self.sample_settings.custom_cfg.initialize_timesteps(model) + + def reset(self): + self.initialized = False + self.start_step: int = 0 + self.last_step: int = 0 + self.current_step: int = 0 + self.total_steps: int = 0 + if self.motion_models is not None: + del self.motion_models + self.motion_models = None + if self.params is not None: + del self.params + self.params = None + if self.sample_settings is not None: + del self.sample_settings + self.sample_settings = None + + def update_with_inject_params(self, params: InjectionParams): + self.params = params + + def is_using_sliding_context(self): + return self.params is not None and self.params.is_using_sliding_context() + + def create_exposed_params(self): + # This dict will be exposed to be used by other extensions + # DO NOT change any of the key names + # or I will find you 👁.👁 + 
return { + "full_length": self.params.full_length, + "context_length": self.params.context_options.context_length, + "sub_idxs": self.params.sub_idxs, + } + +ADGS = AnimateDiffHelper_GlobalState() +###################################################################### +################################################################################## + + +################################################################################## +#### Code Injection ################################################## + +# refer to forward_timestep_embed in comfy/ldm/modules/diffusionmodules/openaimodel.py +def forward_timestep_embed_factory() -> Callable: + def forward_timestep_embed(ts, x, emb, context=None, transformer_options={}, output_shape=None, time_context=None, num_video_frames=None, image_only_indicator=None): + for layer in ts: + if isinstance(layer, openaimodel.VideoResBlock): + x = layer(x, emb, num_video_frames, image_only_indicator) + elif isinstance(layer, openaimodel.TimestepBlock): + x = layer(x, emb) + elif isinstance(layer, VanillaTemporalModule): + x = layer(x, context) + elif isinstance(layer, attention.SpatialVideoTransformer): + x = layer(x, context, time_context, num_video_frames, image_only_indicator, transformer_options) + if "transformer_index" in transformer_options: + transformer_options["transformer_index"] += 1 + if "current_index" in transformer_options: # keep this for backward compat, for now + transformer_options["current_index"] += 1 + elif isinstance(layer, attention.SpatialTransformer): + x = layer(x, context, transformer_options) + if "transformer_index" in transformer_options: + transformer_options["transformer_index"] += 1 + if "current_index" in transformer_options: # keep this for backward compat, for now + transformer_options["current_index"] += 1 + elif isinstance(layer, openaimodel.Upsample): + x = layer(x, output_shape=output_shape) + else: + x = layer(x) + return x + return forward_timestep_embed + + +def unlimited_memory_required(*args, **kwargs): + return 0 + + +def groupnorm_mm_factory(params: InjectionParams, manual_cast=False): + def groupnorm_mm_forward(self, input: Tensor) -> Tensor: + # axes_factor normalizes batch based on total conds and unconds passed in batch; + # the conds and unconds per batch can change based on VRAM optimizations that may kick in + if not params.is_using_sliding_context(): + batched_conds = input.size(0)//params.full_length + else: + batched_conds = input.size(0)//params.context_options.context_length + + input = rearrange(input, "(b f) c h w -> b c f h w", b=batched_conds) + if manual_cast: + weight, bias = comfy.ops.cast_bias_weight(self, input) + else: + weight, bias = self.weight, self.bias + input = group_norm(input, self.num_groups, weight, bias, self.eps) + input = rearrange(input, "b c f h w -> (b f) c h w", b=batched_conds) + return input + return groupnorm_mm_forward + + +def get_additional_models_factory(orig_get_additional_models: Callable, motion_models: MotionModelGroup): + def get_additional_models_with_motion(*args, **kwargs): + models, inference_memory = orig_get_additional_models(*args, **kwargs) + if motion_models is not None: + for motion_model in motion_models.models: + models.append(motion_model) + # TODO: account for inference memory as well? 
+ return models, inference_memory + return get_additional_models_with_motion +###################################################################### +################################################################################## + + +def apply_params_to_motion_models(motion_models: MotionModelGroup, params: InjectionParams): + params = params.clone() + for context in params.context_options.contexts: + if context.context_schedule == ContextSchedules.VIEW_AS_CONTEXT: + context.context_length = params.full_length + # TODO: check (and message) should be different based on use_on_equal_length setting + if params.context_options.context_length: + pass + + allow_equal = params.context_options.use_on_equal_length + if params.context_options.context_length: + enough_latents = params.full_length >= params.context_options.context_length if allow_equal else params.full_length > params.context_options.context_length + else: + enough_latents = False + if params.context_options.context_length and enough_latents: + logger.info(f"Sliding context window activated - latents passed in ({params.full_length}) greater than context_length {params.context_options.context_length}.") + else: + logger.info(f"Regular AnimateDiff activated - latents passed in ({params.full_length}) less or equal to context_length {params.context_options.context_length}.") + params.reset_context() + if motion_models is not None: + # if no context_length, treat video length as intended AD frame window + if not params.context_options.context_length: + for motion_model in motion_models.models: + if not motion_model.model.is_length_valid_for_encoding_max_len(params.full_length): + raise ValueError(f"Without a context window, AnimateDiff model {motion_model.model.mm_info.mm_name} has upper limit of {motion_model.model.encoding_max_len} frames, but received {params.full_length} latents.") + motion_models.set_video_length(params.full_length, params.full_length) + # otherwise, treat context_length as intended AD frame window + else: + for motion_model in motion_models.models: + view_options = params.context_options.view_options + context_length = view_options.context_length if view_options else params.context_options.context_length + if not motion_model.model.is_length_valid_for_encoding_max_len(context_length): + raise ValueError(f"AnimateDiff model {motion_model.model.mm_info.mm_name} has upper limit of {motion_model.model.encoding_max_len} frames for a context window, but received context length of {params.context_options.context_length}.") + motion_models.set_video_length(params.context_options.context_length, params.full_length) + # inject model + module_str = "modules" if len(motion_models.models) > 1 else "module" + logger.info(f"Using motion {module_str} {motion_models.get_name_string(show_version=True)}.") + return params + + +class FunctionInjectionHolder: + def __init__(self): + pass + + def inject_functions(self, model: ModelPatcherAndInjector, params: InjectionParams): + # Save Original Functions + self.orig_forward_timestep_embed = openaimodel.forward_timestep_embed # needed to account for VanillaTemporalModule + self.orig_memory_required = model.model.memory_required # allows for "unlimited area hack" to prevent halving of conds/unconds + self.orig_groupnorm_forward = torch.nn.GroupNorm.forward # used to normalize latents to remove "flickering" of colors/brightness between frames + self.orig_groupnorm_manual_cast_forward = comfy.ops.manual_cast.GroupNorm.forward_comfy_cast_weights + self.orig_sampling_function = 
comfy.samplers.sampling_function # used to support sliding context windows in samplers + self.orig_prepare_mask = comfy.sample.prepare_mask + self.orig_get_additional_models = comfy.sample.get_additional_models + # Inject Functions + openaimodel.forward_timestep_embed = forward_timestep_embed_factory() + if params.unlimited_area_hack: + model.model.memory_required = unlimited_memory_required + if model.motion_models is not None: + # only apply groupnorm hack if not [v3 or ([not Hotshot] and SD1.5 and v2 and apply_v2_properly)] + info: AnimateDiffInfo = model.motion_models[0].model.mm_info + if not (info.mm_version == AnimateDiffVersion.V3 or + (info.mm_format not in [AnimateDiffFormat.HOTSHOTXL] and info.sd_type == ModelTypeSD.SD1_5 and info.mm_version == AnimateDiffVersion.V2 and params.apply_v2_properly)): + torch.nn.GroupNorm.forward = groupnorm_mm_factory(params) + comfy.ops.manual_cast.GroupNorm.forward_comfy_cast_weights = groupnorm_mm_factory(params, manual_cast=True) + # if mps device (Apple Silicon), disable batched conds to avoid black images with groupnorm hack + try: + if model.load_device.type == "mps": + model.model.memory_required = unlimited_memory_required + except Exception: + pass + del info + comfy.samplers.sampling_function = evolved_sampling_function + comfy.sample.prepare_mask = prepare_mask_ad + comfy.sample.get_additional_models = get_additional_models_factory(self.orig_get_additional_models, model.motion_models) + + def restore_functions(self, model: ModelPatcherAndInjector): + # Restoration + try: + model.model.memory_required = self.orig_memory_required + openaimodel.forward_timestep_embed = self.orig_forward_timestep_embed + torch.nn.GroupNorm.forward = self.orig_groupnorm_forward + comfy.ops.manual_cast.GroupNorm.forward_comfy_cast_weights = self.orig_groupnorm_manual_cast_forward + comfy.samplers.sampling_function = self.orig_sampling_function + comfy.sample.prepare_mask = self.orig_prepare_mask + comfy.sample.get_additional_models = self.orig_get_additional_models + except AttributeError: + logger.error("Encountered AttributeError while attempting to restore functions - likely, an error occured while trying " + \ + "to save original functions before injection, and a more specific error was thrown by ComfyUI.") + + +def motion_sample_factory(orig_comfy_sample: Callable, is_custom: bool=False) -> Callable: + def motion_sample(model: ModelPatcherAndInjector, noise: Tensor, *args, **kwargs): + # check if model is intended for injecting + if type(model) != ModelPatcherAndInjector: + return orig_comfy_sample(model, noise, *args, **kwargs) + # otherwise, injection time + latents = None + cached_latents = None + cached_noise = None + function_injections = FunctionInjectionHolder() + try: + if model.sample_settings.custom_cfg is not None: + model = model.sample_settings.custom_cfg.patch_model(model) + # clone params from model + params = model.motion_injection_params.clone() + # get amount of latents passed in, and store in params + latents: Tensor = args[-1] + params.full_length = latents.size(0) + # reset global state + ADGS.reset() + + # apply custom noise, if needed + disable_noise = kwargs.get("disable_noise") or False + seed = kwargs["seed"] + + # apply params to motion model + params = apply_params_to_motion_models(model.motion_models, params) + + # store and inject functions + function_injections.inject_functions(model, params) + + # prepare noise_extra_args for noise generation purposes + noise_extra_args = {"disable_noise": disable_noise} + 
params.set_noise_extra_args(noise_extra_args) + # if noise is not disabled, do noise stuff + if not disable_noise: + noise = model.sample_settings.prepare_noise(seed, latents, noise, extra_args=noise_extra_args, force_create_noise=False) + + # callback setup + original_callback = kwargs.get("callback", None) + def ad_callback(step, x0, x, total_steps): + if original_callback is not None: + original_callback(step, x0, x, total_steps) + # update GLOBALSTATE for next iteration + ADGS.current_step = ADGS.start_step + step + 1 + kwargs["callback"] = ad_callback + ADGS.motion_models = model.motion_models + ADGS.sample_settings = model.sample_settings + + # apply adapt_denoise_steps + args = list(args) + if model.sample_settings.adapt_denoise_steps and not is_custom: + # only applicable when denoise and steps are provided (from simple KSampler nodes) + denoise = kwargs.get("denoise", None) + steps = args[0] + if denoise is not None and type(steps) == int: + args[0] = max(int(denoise * steps), 1) + + + iter_opts = IterationOptions() + if model.sample_settings is not None: + iter_opts = model.sample_settings.iteration_opts + iter_opts.initialize(latents) + # cache initial noise and latents, if needed + if iter_opts.cache_init_latents: + cached_latents = latents.clone() + if iter_opts.cache_init_noise: + cached_noise = noise.clone() + # prepare iter opts preprocess kwargs, if needed + iter_kwargs = {} + if iter_opts.need_sampler: + # -5 for sampler_name (not custom) and sampler (custom) + model_management.load_model_gpu(model) + if is_custom: + iter_kwargs[IterationOptions.SAMPLER] = None #args[-5] + else: + iter_kwargs[IterationOptions.SAMPLER] = comfy.samplers.KSampler( + model.model, steps=999, #steps=args[-7], + device=model.current_device, sampler=args[-5], + scheduler=args[-4], denoise=kwargs.get("denoise", None), + model_options=model.model_options) + + for curr_i in range(iter_opts.iterations): + # handle GLOBALSTATE vars and step tally + ADGS.update_with_inject_params(params) + ADGS.start_step = kwargs.get("start_step") or 0 + ADGS.current_step = ADGS.start_step + ADGS.last_step = kwargs.get("last_step") or 0 + if iter_opts.iterations > 1: + logger.info(f"Iteration {curr_i+1}/{iter_opts.iterations}") + # perform any iter_opts preprocessing on latents + latents, noise = iter_opts.preprocess_latents(curr_i=curr_i, model=model, latents=latents, noise=noise, + cached_latents=cached_latents, cached_noise=cached_noise, + seed=seed, + sample_settings=model.sample_settings, noise_extra_args=noise_extra_args, + **iter_kwargs) + args[-1] = latents + + if model.motion_models is not None: + model.motion_models.pre_run(model) + if model.sample_settings is not None: + model.sample_settings.pre_run(model) + latents = wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs) + return latents + finally: + del latents + del noise + del cached_latents + del cached_noise + # reset global state + ADGS.reset() + # restore injected functions + function_injections.restore_functions(model) + del function_injections + return motion_sample + + +def evolved_sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options: dict={}, seed=None): + ADGS.initialize(model) + if ADGS.motion_models is not None: + ADGS.motion_models.prepare_current_keyframe(t=timestep) + if ADGS.params.context_options is not None: + ADGS.params.context_options.prepare_current_context(t=timestep) + if ADGS.sample_settings.custom_cfg is not None: + 
ADGS.sample_settings.custom_cfg.prepare_current_keyframe(t=timestep) + + # never use cfg1 optimization if using custom_cfg (since can have timesteps and such) + if ADGS.sample_settings.custom_cfg is None and math.isclose(cond_scale, 1.0) and model_options.get("disable_cfg1_optimization", False) == False: + uncond_ = None + else: + uncond_ = uncond + + # add AD/evolved-sampling params to model_options (transformer_options) + model_options = model_options.copy() + if "tranformer_options" not in model_options: + model_options["tranformer_options"] = {} + model_options["transformer_options"]["ad_params"] = ADGS.create_exposed_params() + + if not ADGS.is_using_sliding_context(): + cond_pred, uncond_pred = comfy.samplers.calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options) + else: + cond_pred, uncond_pred = sliding_calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options) + + if "sampler_cfg_function" in model_options: + args = {"cond": x - cond_pred, "uncond": x - uncond_pred, "cond_scale": cond_scale, "timestep": timestep, "input": x, "sigma": timestep, + "cond_denoised": cond_pred, "uncond_denoised": uncond_pred, "model": model, "model_options": model_options} + cfg_result = x - model_options["sampler_cfg_function"](args) + else: + cfg_result = uncond_pred + (cond_pred - uncond_pred) * cond_scale + + for fn in model_options.get("sampler_post_cfg_function", []): + args = {"denoised": cfg_result, "cond": cond, "uncond": uncond, "model": model, "uncond_denoised": uncond_pred, "cond_denoised": cond_pred, + "sigma": timestep, "model_options": model_options, "input": x} + cfg_result = fn(args) + + return cfg_result + + +# sliding_calc_cond_uncond_batch inspired by ashen's initial hack for 16-frame sliding context: +# https://github.com/comfyanonymous/ComfyUI/compare/master...ashen-sensored:ComfyUI:master +def sliding_calc_cond_uncond_batch(model, cond, uncond, x_in: Tensor, timestep, model_options): + def prepare_control_objects(control: ControlBase, full_idxs: list[int]): + if control.previous_controlnet is not None: + prepare_control_objects(control.previous_controlnet, full_idxs) + control.sub_idxs = full_idxs + control.full_latent_length = ADGS.params.full_length + control.context_length = ADGS.params.context_options.context_length + + def get_resized_cond(cond_in, full_idxs) -> list: + # reuse or resize cond items to match context requirements + resized_cond = [] + # cond object is a list containing a dict - outer list is irrelevant, so just loop through it + for actual_cond in cond_in: + resized_actual_cond = actual_cond.copy() + # now we are in the inner dict - "pooled_output" is a tensor, "control" is a ControlBase object, "model_conds" is dictionary + for key in actual_cond: + try: + cond_item = actual_cond[key] + if isinstance(cond_item, Tensor): + # check that tensor is the expected length - x.size(0) + if cond_item.size(0) == x_in.size(0): + # if so, it's subsetting time - tell controls the expected indeces so they can handle them + actual_cond_item = cond_item[full_idxs] + resized_actual_cond[key] = actual_cond_item + else: + resized_actual_cond[key] = cond_item + # look for control + elif key == "control": + control_item = cond_item + if hasattr(control_item, "sub_idxs"): + prepare_control_objects(control_item, full_idxs) + else: + raise ValueError(f"Control type {type(control_item).__name__} may not support required features for sliding context window; \ + use Control objects from Kosinkadink/ComfyUI-Advanced-ControlNet nodes, or make sure 
Advanced-ControlNet is updated.") + resized_actual_cond[key] = control_item + del control_item + elif isinstance(cond_item, dict): + new_cond_item = cond_item.copy() + # when in dictionary, look for tensors and CONDCrossAttn [comfy/conds.py] (has cond attr that is a tensor) + for cond_key, cond_value in new_cond_item.items(): + if isinstance(cond_value, Tensor): + if cond_value.size(0) == x_in.size(0): + new_cond_item[cond_key] = cond_value[full_idxs] + # if has cond that is a Tensor, check if needs to be subset + elif hasattr(cond_value, "cond") and isinstance(cond_value.cond, Tensor): + if cond_value.cond.size(0) == x_in.size(0): + new_cond_item[cond_key] = cond_value._copy_with(cond_value.cond[full_idxs]) + resized_actual_cond[key] = new_cond_item + else: + resized_actual_cond[key] = cond_item + finally: + del cond_item # just in case to prevent VRAM issues + resized_cond.append(resized_actual_cond) + return resized_cond + + # get context windows + ADGS.params.context_options.step = ADGS.current_step + context_windows = get_context_windows(ADGS.params.full_length, ADGS.params.context_options) + # figure out how input is split + batched_conds = x_in.size(0)//ADGS.params.full_length + + if ADGS.motion_models is not None: + ADGS.motion_models.set_view_options(ADGS.params.context_options.view_options) + + # prepare final cond, uncond, and out_count + cond_final = torch.zeros_like(x_in) + uncond_final = torch.zeros_like(x_in) + out_count_final = torch.zeros((x_in.shape[0], 1, 1, 1), device=x_in.device) + bias_final = [0.0] * x_in.shape[0] + + # perform calc_cond_uncond_batch per context window + for ctx_idxs in context_windows: + ADGS.params.sub_idxs = ctx_idxs + if ADGS.motion_models is not None: + ADGS.motion_models.set_sub_idxs(ctx_idxs) + ADGS.motion_models.set_video_length(len(ctx_idxs), ADGS.params.full_length) + # update exposed params + model_options["transformer_options"]["ad_params"]["sub_idxs"] = ctx_idxs + model_options["transformer_options"]["ad_params"]["context_length"] = len(ctx_idxs) + # account for all portions of input frames + full_idxs = [] + for n in range(batched_conds): + for ind in ctx_idxs: + full_idxs.append((ADGS.params.full_length*n)+ind) + # get subsections of x, timestep, cond, uncond, cond_concat + sub_x = x_in[full_idxs] + sub_timestep = timestep[full_idxs] + sub_cond = get_resized_cond(cond, full_idxs) if cond is not None else None + sub_uncond = get_resized_cond(uncond, full_idxs) if uncond is not None else None + + sub_cond_out, sub_uncond_out = comfy.samplers.calc_cond_uncond_batch(model, sub_cond, sub_uncond, sub_x, sub_timestep, model_options) + + if ADGS.params.context_options.fuse_method == ContextFuseMethod.RELATIVE: + full_length = ADGS.params.full_length + for pos, idx in enumerate(ctx_idxs): + # bias is the influence of a specific index in relation to the whole context window + bias = 1 - abs(idx - (ctx_idxs[0] + ctx_idxs[-1]) / 2) / ((ctx_idxs[-1] - ctx_idxs[0] + 1e-2) / 2) + bias = max(1e-2, bias) + # take weighted average relative to total bias of current idx + # and account for batched_conds + for n in range(batched_conds): + bias_total = bias_final[(full_length*n)+idx] + prev_weight = (bias_total / (bias_total + bias)) + new_weight = (bias / (bias_total + bias)) + cond_final[(full_length*n)+idx] = cond_final[(full_length*n)+idx] * prev_weight + sub_cond_out[(full_length*n)+pos] * new_weight + uncond_final[(full_length*n)+idx] = uncond_final[(full_length*n)+idx] * prev_weight + sub_uncond_out[(full_length*n)+pos] * new_weight + 
bias_final[(full_length*n)+idx] = bias_total + bias + else: + # add conds and counts based on weights of fuse method + weights = get_context_weights(len(ctx_idxs), ADGS.params.context_options.fuse_method) * batched_conds + weights_tensor = torch.Tensor(weights).to(device=x_in.device).unsqueeze(-1).unsqueeze(-1).unsqueeze(-1) + cond_final[full_idxs] += sub_cond_out * weights_tensor + uncond_final[full_idxs] += sub_uncond_out * weights_tensor + out_count_final[full_idxs] += weights_tensor + + if ADGS.params.context_options.fuse_method == ContextFuseMethod.RELATIVE: + # already normalized, so return as is + del out_count_final + return cond_final, uncond_final + else: + # normalize cond and uncond via division by context usage counts + cond_final /= out_count_final + uncond_final /= out_count_final + del out_count_final + return cond_final, uncond_final diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/utils_model.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/utils_model.py new file mode 100644 index 0000000000000000000000000000000000000000..21a91cfe128e388ac6cdc7749c78156dd49ef2a4 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/utils_model.py @@ -0,0 +1,417 @@ +import hashlib +from pathlib import Path +from typing import Callable, Union +from collections.abc import Iterable +from time import time +import copy + +import torch +import numpy as np + +import folder_paths +from comfy.model_base import SD21UNCLIP, SDXL, BaseModel, SDXLRefiner, SVD_img2vid, model_sampling, ModelType +from comfy.model_management import xformers_enabled +from comfy.model_patcher import ModelPatcher + +import comfy.model_sampling +import comfy_extras.nodes_model_advanced + + +BIGMIN = -(2**53-1) +BIGMAX = (2**53-1) + + +class ModelSamplingConfig: + def __init__(self, beta_schedule: str, linear_start: float=None, linear_end: float=None): + self.sampling_settings = {"beta_schedule": beta_schedule} + if linear_start is not None: + self.sampling_settings["linear_start"] = linear_start + if linear_end is not None: + self.sampling_settings["linear_end"] = linear_end + self.beta_schedule = beta_schedule # keeping this for backwards compatibility + + +class ModelSamplingType: + EPS = "eps" + V_PREDICTION = "v_prediction" + LCM = "lcm" + + _NON_LCM_LIST = [EPS, V_PREDICTION] + _FULL_LIST = [EPS, V_PREDICTION, LCM] + + MAP = { + EPS: ModelType.EPS, + V_PREDICTION: ModelType.V_PREDICTION, + LCM: comfy_extras.nodes_model_advanced.LCM, + } + + @classmethod + def from_alias(cls, alias: str): + return cls.MAP[alias] + + +def factory_model_sampling_discrete_distilled(original_timesteps=50): + class ModelSamplingDiscreteDistilledEvolved(comfy_extras.nodes_model_advanced.ModelSamplingDiscreteDistilled): + def __init__(self, *args, **kwargs): + self.original_timesteps = original_timesteps # normal LCM has 50 + super().__init__(*args, **kwargs) + return ModelSamplingDiscreteDistilledEvolved + + +# based on code in comfy_extras/nodes_model_advanced.py +def evolved_model_sampling(model_config: ModelSamplingConfig, model_type: ModelType, alias: str, original_timesteps: int=None): + # if LCM, need to handle manually + if BetaSchedules.is_lcm(alias) or original_timesteps is not None: + sampling_type = comfy_extras.nodes_model_advanced.LCM + if original_timesteps is not None: + sampling_base = factory_model_sampling_discrete_distilled(original_timesteps=original_timesteps) + elif alias == BetaSchedules.LCM_100: + sampling_base = 
factory_model_sampling_discrete_distilled(original_timesteps=100) + elif alias == BetaSchedules.LCM_25: + sampling_base = factory_model_sampling_discrete_distilled(original_timesteps=25) + else: + sampling_base = comfy_extras.nodes_model_advanced.ModelSamplingDiscreteDistilled + class ModelSamplingAdvancedEvolved(sampling_base, sampling_type): + pass + # NOTE: if I want to support zsnr, this is where I would add that code + return ModelSamplingAdvancedEvolved(model_config) + # otherwise, use vanilla model_sampling function + return model_sampling(model_config, model_type) + + +class BetaSchedules: + AUTOSELECT = "autoselect" + SQRT_LINEAR = "sqrt_linear (AnimateDiff)" + LINEAR_ADXL = "linear (AnimateDiff-SDXL)" + LINEAR = "linear (HotshotXL/default)" + AVG_LINEAR_SQRT_LINEAR = "avg(sqrt_linear,linear)" + LCM_AVG_LINEAR_SQRT_LINEAR = "lcm avg(sqrt_linear,linear)" + LCM = "lcm" + LCM_100 = "lcm[100_ots]" + LCM_25 = "lcm[25_ots]" + LCM_SQRT_LINEAR = "lcm >> sqrt_linear" + USE_EXISTING = "use existing" + SQRT = "sqrt" + COSINE = "cosine" + SQUAREDCOS_CAP_V2 = "squaredcos_cap_v2" + RAW_LINEAR = "linear" + RAW_SQRT_LINEAR = "sqrt_linear" + + RAW_BETA_SCHEDULE_LIST = [RAW_LINEAR, RAW_SQRT_LINEAR, SQRT, COSINE, SQUAREDCOS_CAP_V2] + + ALIAS_LCM_LIST = [LCM, LCM_100, LCM_25, LCM_SQRT_LINEAR] + + ALIAS_ACTIVE_LIST = [SQRT_LINEAR, LINEAR_ADXL, LINEAR, AVG_LINEAR_SQRT_LINEAR, LCM_AVG_LINEAR_SQRT_LINEAR, LCM, LCM_100, LCM_SQRT_LINEAR, # LCM_25 is purposely omitted + SQRT, COSINE, SQUAREDCOS_CAP_V2] + + ALIAS_LIST = [AUTOSELECT, USE_EXISTING] + ALIAS_ACTIVE_LIST + + + + ALIAS_MAP = { + SQRT_LINEAR: "sqrt_linear", + LINEAR_ADXL: "linear", # also linear, but has different linear_end (0.020) + LINEAR: "linear", + LCM_100: "linear", # distilled, 100 original timesteps + LCM_25: "linear", # distilled, 25 original timesteps + LCM: "linear", # distilled + LCM_SQRT_LINEAR: "sqrt_linear", # distilled, sqrt_linear + SQRT: "sqrt", + COSINE: "cosine", + SQUAREDCOS_CAP_V2: "squaredcos_cap_v2", + RAW_LINEAR: "linear", + RAW_SQRT_LINEAR: "sqrt_linear" + } + + @classmethod + def is_lcm(cls, alias: str): + return alias in cls.ALIAS_LCM_LIST + + @classmethod + def to_name(cls, alias: str): + return cls.ALIAS_MAP[alias] + + @classmethod + def to_config(cls, alias: str) -> ModelSamplingConfig: + linear_start = None + linear_end = None + if alias == cls.LINEAR_ADXL: + # uses linear_end=0.020 + linear_end = 0.020 + return ModelSamplingConfig(cls.to_name(alias), linear_start=linear_start, linear_end=linear_end) + + @classmethod + def _to_model_sampling(cls, alias: str, model_type: ModelType, config_override: ModelSamplingConfig=None, original_timesteps: int=None): + if alias == cls.USE_EXISTING: + return None + elif config_override != None: + return evolved_model_sampling(config_override, model_type=model_type, alias=alias, original_timesteps=original_timesteps) + elif alias == cls.AVG_LINEAR_SQRT_LINEAR: + ms_linear = evolved_model_sampling(cls.to_config(cls.LINEAR), model_type=model_type, alias=cls.LINEAR) + ms_sqrt_linear = evolved_model_sampling(cls.to_config(cls.SQRT_LINEAR), model_type=model_type, alias=cls.SQRT_LINEAR) + avg_sigmas = (ms_linear.sigmas + ms_sqrt_linear.sigmas) / 2 + ms_linear.set_sigmas(avg_sigmas) + return ms_linear + elif alias == cls.LCM_AVG_LINEAR_SQRT_LINEAR: + ms_linear = evolved_model_sampling(cls.to_config(cls.LCM), model_type=model_type, alias=cls.LCM) + ms_sqrt_linear = evolved_model_sampling(cls.to_config(cls.LCM_SQRT_LINEAR), model_type=model_type, alias=cls.LCM_SQRT_LINEAR) + avg_sigmas = 
(ms_linear.sigmas + ms_sqrt_linear.sigmas) / 2 + ms_linear.set_sigmas(avg_sigmas) + return ms_linear + # average out the sigmas + ms_obj = evolved_model_sampling(cls.to_config(alias), model_type=model_type, alias=alias, original_timesteps=original_timesteps) + return ms_obj + + @classmethod + def to_model_sampling(cls, alias: str, model: ModelPatcher): + return cls._to_model_sampling(alias=alias, model_type=model.model.model_type) + + @staticmethod + def get_alias_list_with_first_element(first_element: str): + new_list = BetaSchedules.ALIAS_LIST.copy() + element_index = new_list.index(first_element) + new_list[0], new_list[element_index] = new_list[element_index], new_list[0] + return new_list + + +class SigmaSchedule: + def __init__(self, model_sampling: comfy.model_sampling.ModelSamplingDiscrete, model_type: ModelType): + self.model_sampling = model_sampling + #self.config = config + self.model_type = model_type + self.original_timesteps = getattr(self.model_sampling, "original_timesteps", None) + + def is_lcm(self): + return self.original_timesteps is not None + + def total_sigmas(self): + return len(self.model_sampling.sigmas) + + def clone(self) -> 'SigmaSchedule': + new_model_sampling = copy.deepcopy(self.model_sampling) + #new_config = copy.deepcopy(self.config) + return SigmaSchedule(model_sampling=new_model_sampling, model_type=self.model_type) + + # def clone(self): + # pass + + @staticmethod + def apply_zsnr(new_model_sampling: comfy.model_sampling.ModelSamplingDiscrete): + new_model_sampling.set_sigmas(comfy_extras.nodes_model_advanced.rescale_zero_terminal_snr_sigmas(new_model_sampling.sigmas)) + + # def get_lcmified(self, original_timesteps=50, zsnr=False) -> 'SigmaSchedule': + # new_model_sampling = evolved_model_sampling(model_config=self.config, model_type=self.model_type, alias=None, original_timesteps=original_timesteps) + # if zsnr: + # new_model_sampling.set_sigmas(comfy_extras.nodes_model_advanced.rescale_zero_terminal_snr_sigmas(new_model_sampling.sigmas)) + # return SigmaSchedule(model_sampling=new_model_sampling, config=self.config, model_type=self.model_type, is_lcm=True) + + +class InterpolationMethod: + LINEAR = "linear" + EASE_IN = "ease_in" + EASE_OUT = "ease_out" + EASE_IN_OUT = "ease_in_out" + + _LIST = [LINEAR, EASE_IN, EASE_OUT, EASE_IN_OUT] + + @classmethod + def get_weights(cls, num_from: float, num_to: float, length: int, method: str, reverse=False): + diff = num_to - num_from + if method == cls.LINEAR: + weights = torch.linspace(num_from, num_to, length) + elif method == cls.EASE_IN: + index = torch.linspace(0, 1, length) + weights = diff * np.power(index, 2) + num_from + elif method == cls.EASE_OUT: + index = torch.linspace(0, 1, length) + weights = diff * (1 - np.power(1 - index, 2)) + num_from + elif method == cls.EASE_IN_OUT: + index = torch.linspace(0, 1, length) + weights = diff * ((1 - np.cos(index * np.pi)) / 2) + num_from + else: + raise ValueError(f"Unrecognized interpolation method '{method}'.") + if reverse: + weights = weights.flip(dims=(0,)) + return weights + + +class Folders: + ANIMATEDIFF_MODELS = "animatediff_models" + MOTION_LORA = "animatediff_motion_lora" + VIDEO_FORMATS = "animatediff_video_formats" + + +def add_extension_to_folder_path(folder_name: str, extensions: Union[str, list[str]]): + if folder_name in folder_paths.folder_names_and_paths: + if isinstance(extensions, str): + folder_paths.folder_names_and_paths[folder_name][1].add(extensions) + elif isinstance(extensions, Iterable): + for ext in extensions: + 
folder_paths.folder_names_and_paths[folder_name][1].add(ext) + + +def try_mkdir(full_path: str): + try: + Path(full_path).mkdir() + except Exception: + pass + + +# register motion models folder(s) +folder_paths.add_model_folder_path(Folders.ANIMATEDIFF_MODELS, str(Path(__file__).parent.parent / "models")) +folder_paths.add_model_folder_path(Folders.ANIMATEDIFF_MODELS, str(Path(folder_paths.models_dir) / Folders.ANIMATEDIFF_MODELS)) +add_extension_to_folder_path(Folders.ANIMATEDIFF_MODELS, folder_paths.supported_pt_extensions) +try_mkdir(str(Path(folder_paths.models_dir) / Folders.ANIMATEDIFF_MODELS)) + +# register motion LoRA folder(s) +folder_paths.add_model_folder_path(Folders.MOTION_LORA, str(Path(__file__).parent.parent / "motion_lora")) +folder_paths.add_model_folder_path(Folders.MOTION_LORA, str(Path(folder_paths.models_dir) / Folders.MOTION_LORA)) +add_extension_to_folder_path(Folders.MOTION_LORA, folder_paths.supported_pt_extensions) +try_mkdir(str(Path(folder_paths.models_dir) / Folders.MOTION_LORA)) + +# register video_formats folder +folder_paths.add_model_folder_path(Folders.VIDEO_FORMATS, str(Path(__file__).parent.parent / "video_formats")) +add_extension_to_folder_path(Folders.VIDEO_FORMATS, ".json") + + +def get_available_motion_models(): + return folder_paths.get_filename_list(Folders.ANIMATEDIFF_MODELS) + + +def get_motion_model_path(model_name: str): + return folder_paths.get_full_path(Folders.ANIMATEDIFF_MODELS, model_name) + + +def get_available_motion_loras(): + return folder_paths.get_filename_list(Folders.MOTION_LORA) + + +def get_motion_lora_path(lora_name: str): + return folder_paths.get_full_path(Folders.MOTION_LORA, lora_name) + + +# modified from https://stackoverflow.com/questions/22058048/hashing-a-file-in-python +def calculate_file_hash(filename: str, hash_every_n: int = 50): + h = hashlib.sha256() + b = bytearray(1024*1024) + mv = memoryview(b) + with open(filename, 'rb', buffering=0) as f: + i = 0 + # don't hash entire file, only portions of it + while n := f.readinto(mv): + if i%hash_every_n == 0: + h.update(mv[:n]) + i += 1 + return h.hexdigest() + + +def calculate_model_hash(model: ModelPatcher): + unet = model.model.diff + t = unet.input_blocks[1] + m = hashlib.sha256() + for buf in t.buffers(): + m.update(buf.cpu().numpy().view(np.uint8)) + return m.hexdigest() + + +class ModelTypeSD: + SD1_5 = "SD1.5" + SD2_1 = "SD2.1" + SDXL = "SDXL" + SDXL_REFINER = "SDXL_Refiner" + SVD = "SVD" + + +def get_sd_model_type(model: ModelPatcher) -> str: + if model is None: + return None + elif type(model.model) == BaseModel: + return ModelTypeSD.SD1_5 + elif type(model.model) == SDXL: + return ModelTypeSD.SDXL + elif type(model.model) == SD21UNCLIP: + return ModelTypeSD.SD2_1 + elif type(model.model) == SDXLRefiner: + return ModelTypeSD.SDXL_REFINER + elif type(model.model) == SVD_img2vid: + return ModelTypeSD.SVD + else: + return str(type(model.model).__name__) + +def is_checkpoint_sd1_5(model: ModelPatcher): + return False if model is None else type(model.model) == BaseModel + +def is_checkpoint_sdxl(model: ModelPatcher): + return False if model is None else type(model.model) == SDXL + + +def raise_if_not_checkpoint_sd1_5(model: ModelPatcher): + if not is_checkpoint_sd1_5(model): + raise ValueError(f"For AnimateDiff, SD Checkpoint (model) is expected to be SD1.5-based (BaseModel), but was: {type(model.model).__name__}") + + +# TODO: remove this filth when xformers bug gets fixed in future xformers version +def 
wrap_function_to_inject_xformers_bug_info(function_to_wrap: Callable) -> Callable: + if not xformers_enabled: + return function_to_wrap + else: + def wrapped_function(*args, **kwargs): + try: + return function_to_wrap(*args, **kwargs) + except RuntimeError as e: + if str(e).startswith("CUDA error: invalid configuration argument"): + raise RuntimeError(f"An xformers bug was encountered in AnimateDiff - this is unexpected, \ + report this to Kosinkadink/ComfyUI-AnimateDiff-Evolved repo as an issue, \ + and a workaround for now is to run ComfyUI with the --disable-xformers argument.") + raise + return wrapped_function + + +class Timer(object): + __slots__ = ("start_time", "end_time") + + def __init__(self) -> None: + self.start_time = 0.0 + self.end_time = 0.0 + + def start(self) -> None: + self.start_time = time() + + def update(self) -> None: + self.start() + + def stop(self) -> float: + self.end_time = time() + return self.get_time_diff() + + def get_time_diff(self) -> float: + return self.end_time - self.start_time + + def get_time_current(self) -> float: + return time() - self.start_time + + +# TODO: possibly add configuration file in future when needed? +# # Load config settings +# ADE_DIR = Path(__file__).parent.parent +# ADE_CONFIG_FILE = ADE_DIR / "ade_config.json" + +# class ADE_Settings: +# USE_XFORMERS_IN_VERSATILE_ATTENTION = "use_xformers_in_VersatileAttention" + +# # Create ADE config if not present +# ABS_CONFIG = { +# ADE_Settings.USE_XFORMERS_IN_VERSATILE_ATTENTION: True +# } +# if not ADE_CONFIG_FILE.exists(): +# with ADE_CONFIG_FILE.open("w") as f: +# json.dumps(ABS_CONFIG, indent=4) +# # otherwise, load it and use values +# else: +# loaded_values: dict = None +# with ADE_CONFIG_FILE.open("r") as f: +# loaded_values = json.load(f) +# if loaded_values is not None: +# for key, value in loaded_values.items(): +# if key in ABS_CONFIG: +# ABS_CONFIG[key] = value diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/utils_motion.py b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/utils_motion.py new file mode 100644 index 0000000000000000000000000000000000000000..35f8129d4bc65c3ee8208e67cce7ad97ca996506 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/utils_motion.py @@ -0,0 +1,230 @@ +from typing import Union +import torch +import torch.nn.functional as F +from torch import Tensor, nn + +import comfy.model_management as model_management +import comfy.ops +import comfy.utils +from comfy.cli_args import args +from comfy.ldm.modules.attention import attention_basic, attention_pytorch, attention_split, attention_sub_quad, default + +from .logger import logger + + +# until xformers bug is fixed, do not use xformers for VersatileAttention! 
TODO: change this when fix is out +# logic for choosing optimized_attention method taken from comfy/ldm/modules/attention.py +optimized_attention_mm = attention_basic +if model_management.xformers_enabled(): + pass + #optimized_attention_mm = attention_xformers +if model_management.pytorch_attention_enabled(): + optimized_attention_mm = attention_pytorch +else: + if args.use_split_cross_attention: + optimized_attention_mm = attention_split + else: + optimized_attention_mm = attention_sub_quad + + +class CrossAttentionMM(nn.Module): + def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0., dtype=None, device=None, + operations=comfy.ops.disable_weight_init): + super().__init__() + inner_dim = dim_head * heads + context_dim = default(context_dim, query_dim) + + self.heads = heads + self.dim_head = dim_head + self.scale = None + self.default_scale = dim_head ** -0.5 + + self.to_q = operations.Linear(query_dim, inner_dim, bias=False, dtype=dtype, device=device) + self.to_k = operations.Linear(context_dim, inner_dim, bias=False, dtype=dtype, device=device) + self.to_v = operations.Linear(context_dim, inner_dim, bias=False, dtype=dtype, device=device) + + self.to_out = nn.Sequential(operations.Linear(inner_dim, query_dim, dtype=dtype, device=device), nn.Dropout(dropout)) + + def forward(self, x, context=None, value=None, mask=None, scale_mask=None): + q = self.to_q(x) + context = default(context, x) + k: Tensor = self.to_k(context) + if value is not None: + v = self.to_v(value) + del value + else: + v = self.to_v(context) + + # apply custom scale by multiplying k by scale factor + if self.scale is not None: + k *= self.scale + + # apply scale mask, if present + if scale_mask is not None: + k *= scale_mask + + out = optimized_attention_mm(q, k, v, self.heads, mask) + return self.to_out(out) + +# TODO: set up comfy.ops style classes for groupnorm and other functions +class GroupNormAD(torch.nn.GroupNorm): + def __init__(self, num_groups: int, num_channels: int, eps: float = 1e-5, affine: bool = True, + device=None, dtype=None) -> None: + super().__init__(num_groups=num_groups, num_channels=num_channels, eps=eps, affine=affine, device=device, dtype=dtype) + + def forward(self, input: Tensor) -> Tensor: + return F.group_norm( + input, self.num_groups, self.weight, self.bias, self.eps) + + +# applies min-max normalization, from: +# https://stackoverflow.com/questions/68791508/min-max-normalization-of-a-tensor-in-pytorch +def normalize_min_max(x: Tensor, new_min = 0.0, new_max = 1.0): + return linear_conversion(x, x_min=x.min(), x_max=x.max(), new_min=new_min, new_max=new_max) + + +def linear_conversion(x, x_min=0.0, x_max=1.0, new_min=0.0, new_max=1.0): + x_min = float(x_min) + x_max = float(x_max) + new_min = float(new_min) + new_max = float(new_max) + return (((x - x_min)/(x_max - x_min)) * (new_max - new_min)) + new_min + + +# adapted from comfy/sample.py +def prepare_mask_batch(mask: Tensor, shape: Tensor, multiplier: int=1, match_dim1=False): + mask = mask.clone() + mask = torch.nn.functional.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), size=(shape[2]*multiplier, shape[3]*multiplier), mode="bilinear") + if match_dim1: + mask = torch.cat([mask] * shape[1], dim=1) + return mask + + +def extend_to_batch_size(tensor: Tensor, batch_size: int): + if tensor.shape[0] > batch_size: + return tensor[:batch_size] + elif tensor.shape[0] < batch_size: + remainder = batch_size-tensor.shape[0] + return torch.cat([tensor] + [tensor[-1:]]*remainder, dim=0) + return 
tensor + + +def get_sorted_list_via_attr(objects: list, attr: str) -> list: + if not objects: + return objects + elif len(objects) <= 1: + return [x for x in objects] + # now that we know we have to sort, do it following these rules: + # a) if objects have same value of attribute, maintain their relative order + # b) perform sorting of the groups of objects with same attributes + unique_attrs = {} + for o in objects: + val_attr = getattr(o, attr) + attr_list = unique_attrs.get(val_attr, list()) + attr_list.append(o) + if val_attr not in unique_attrs: + unique_attrs[val_attr] = attr_list + # now that we have the unique attr values grouped together in relative order, sort them by key + sorted_attrs = dict(sorted(unique_attrs.items())) + # now flatten out the dict into a list to return + sorted_list = [] + for object_list in sorted_attrs.values(): + sorted_list.extend(object_list) + return sorted_list + + +class MotionCompatibilityError(ValueError): + pass + + +def get_combined_multival(multivalA: Union[float, Tensor], multivalB: Union[float, Tensor]) -> Union[float, Tensor]: + # if one is None, use the other + if multivalA == None: + return multivalB + elif multivalB == None: + return multivalA + # both have a value - combine them based on type + # if both are Tensors, make dims match before multiplying + if type(multivalA) == Tensor and type(multivalB) == Tensor: + areaA = multivalA.shape[1]*multivalA.shape[2] + areaB = multivalB.shape[1]*multivalB.shape[2] + # match height/width to mask with larger area + leader,follower = multivalA,multivalB if areaA >= areaB else multivalB,multivalA + batch_size = multivalA.shape[0] if multivalA.shape[0] >= multivalB.shape[0] else multivalB.shape[0] + # make follower same dimensions as leader + follower = torch.unsqueeze(follower, 1) + follower = comfy.utils.common_upscale(follower, leader.shape[2], leader.shape[1], "bilinear", "center") + follower = torch.squeeze(follower, 1) + # make sure batch size will match + leader = extend_to_batch_size(leader, batch_size) + follower = extend_to_batch_size(follower, batch_size) + return leader * follower + # otherwise, just multiply them together - one of them is a float + return multivalA * multivalB + + +class ADKeyframe: + def __init__(self, + start_percent: float = 0.0, + scale_multival: Union[float, Tensor]=None, + effect_multival: Union[float, Tensor]=None, + inherit_missing: bool=True, + guarantee_steps: int=1, + default: bool=False, + ): + self.start_percent = start_percent + self.start_t = 999999999.9 + self.scale_multival = scale_multival + self.effect_multival = effect_multival + self.inherit_missing = inherit_missing + self.guarantee_steps = guarantee_steps + self.default = default + + def has_scale(self): + return self.scale_multival is not None + + def has_effect(self): + return self.effect_multival is not None + + +class ADKeyframeGroup: + def __init__(self): + self.keyframes: list[ADKeyframe] = [] + self.keyframes.append(ADKeyframe(guarantee_steps=1, default=True)) + + def add(self, keyframe: ADKeyframe): + # remove any default keyframes that match start_percent of new keyframe + default_to_delete = [] + for i in range(len(self.keyframes)): + if self.keyframes[i].default and self.keyframes[i].start_percent == keyframe.start_percent: + default_to_delete.append(i) + for i in reversed(default_to_delete): + self.keyframes.pop(i) + # add to end of list, then sort + self.keyframes.append(keyframe) + self.keyframes = get_sorted_list_via_attr(self.keyframes, "start_percent") + + def get_index(self, index: 
int) -> Union[ADKeyframe, None]: + try: + return self.keyframes[index] + except IndexError: + return None + + def has_index(self, index: int) -> int: + return index >=0 and index < len(self.keyframes) + + def __getitem__(self, index) -> ADKeyframe: + return self.keyframes[index] + + def __len__(self) -> int: + return len(self.keyframes) + + def is_empty(self) -> bool: + return len(self.keyframes) == 0 + + def clone(self) -> 'ADKeyframeGroup': + cloned = ADKeyframeGroup() + for tk in self.keyframes: + if not tk.default: + cloned.add(tk) + return cloned diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/models/.gitkeep b/custom_nodes/ComfyUI-AnimateDiff-Evolved/models/.gitkeep new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/models/mm_sd_v15_v2.ckpt b/custom_nodes/ComfyUI-AnimateDiff-Evolved/models/mm_sd_v15_v2.ckpt new file mode 100644 index 0000000000000000000000000000000000000000..e52e8a28920f998f42b8fa2bc0a277f42c6eb176 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/models/mm_sd_v15_v2.ckpt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69ed0f5fef82b110aca51bcab73b21104242bc65d6ab4b8b2a2a94d31cad1bf0 +size 1817888431 diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/.gitkeep b/custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/.gitkeep new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/v2_lora_ZoomIn.ckpt b/custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/v2_lora_ZoomIn.ckpt new file mode 100644 index 0000000000000000000000000000000000000000..1880f544156c5edc4329c5b41e18c6542a547cf0 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora/v2_lora_ZoomIn.ckpt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70ce8b9057b173b9249c48aca5d66c8aa1d8aaa040fda394e50e37f3e278195e +size 77474499 diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/av1-webm.json b/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/av1-webm.json new file mode 100644 index 0000000000000000000000000000000000000000..137ad1872a7abb36cd68e30ddc34eb4cfd14aa44 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/av1-webm.json @@ -0,0 +1,10 @@ +{ + "main_pass": + [ + "-n", "-c:v", "libsvtav1", + "-pix_fmt", "yuv420p10le", + "-crf", "23" + ], + "extension": "webm", + "environment": {"SVT_LOG": "1"} +} diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/h264-mp4.json b/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/h264-mp4.json new file mode 100644 index 0000000000000000000000000000000000000000..6b50b12cca742e4d3120145e7c7fbf7e22e3bfb6 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/h264-mp4.json @@ -0,0 +1,9 @@ +{ + "main_pass": + [ + "-n", "-c:v", "libx264", + "-pix_fmt", "yuv420p", + "-crf", "19" + ], + "extension": "mp4" +} diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/h265-mp4.json b/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/h265-mp4.json new file mode 100644 index 0000000000000000000000000000000000000000..a8b677bd69e6d0f6eaea75be7e104e83e278bc78 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/h265-mp4.json @@ -0,0 +1,11 @@ +{ + "main_pass": + [ + "-n", "-c:v", "libx265", + "-pix_fmt", "yuv420p10le", + "-preset", "medium", + "-crf", 
"22", + "-x265-params", "log-level=quiet" + ], + "extension": "mp4" +} diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/webm.json b/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/webm.json new file mode 100644 index 0000000000000000000000000000000000000000..1551e2c7e89d0e3663914951a87a2db05ad638e6 --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/video_formats/webm.json @@ -0,0 +1,9 @@ +{ + "main_pass": + [ + "-n", + "-pix_fmt", "yuv420p", + "-crf", "23" + ], + "extension": "webm" +} diff --git a/custom_nodes/ComfyUI-AnimateDiff-Evolved/web/js/gif_preview.js b/custom_nodes/ComfyUI-AnimateDiff-Evolved/web/js/gif_preview.js new file mode 100644 index 0000000000000000000000000000000000000000..860876ed2d2f681e492fc7d63850834debd13b5e --- /dev/null +++ b/custom_nodes/ComfyUI-AnimateDiff-Evolved/web/js/gif_preview.js @@ -0,0 +1,142 @@ +import { app } from '../../../scripts/app.js' +import { api } from '../../../scripts/api.js' + +function offsetDOMWidget( + widget, + ctx, + node, + widgetWidth, + widgetY, + height + ) { + const margin = 10 + const elRect = ctx.canvas.getBoundingClientRect() + const transform = new DOMMatrix() + .scaleSelf( + elRect.width / ctx.canvas.width, + elRect.height / ctx.canvas.height + ) + .multiplySelf(ctx.getTransform()) + .translateSelf(0, widgetY + margin) + + const scale = new DOMMatrix().scaleSelf(transform.a, transform.d) + Object.assign(widget.inputEl.style, { + transformOrigin: '0 0', + transform: scale, + left: `${transform.e}px`, + top: `${transform.d + transform.f}px`, + width: `${widgetWidth}px`, + height: `${(height || widget.parent?.inputHeight || 32) - margin}px`, + position: 'absolute', + background: !node.color ? '' : node.color, + color: !node.color ? '' : 'white', + zIndex: 5, //app.graph._nodes.indexOf(node), + }) + } + + export const hasWidgets = (node) => { + if (!node.widgets || !node.widgets?.[Symbol.iterator]) { + return false + } + return true + } + + export const cleanupNode = (node) => { + if (!hasWidgets(node)) { + return + } + + for (const w of node.widgets) { + if (w.canvas) { + w.canvas.remove() + } + if (w.inputEl) { + w.inputEl.remove() + } + // calls the widget remove callback + w.onRemoved?.() + } + } + +const CreatePreviewElement = (name, val, format) => { + const [type] = format.split('/') + const w = { + name, + type, + value: val, + draw: function (ctx, node, widgetWidth, widgetY, height) { + const [cw, ch] = this.computeSize(widgetWidth) + offsetDOMWidget(this, ctx, node, widgetWidth, widgetY, ch) + }, + computeSize: function (_) { + const ratio = this.inputRatio || 1 + const width = Math.max(220, this.parent.size[0]) + return [width, (width / ratio + 10)] + }, + onRemoved: function () { + if (this.inputEl) { + this.inputEl.remove() + } + }, + } + + w.inputEl = document.createElement(type === 'video' ? 
'video' : 'img') + w.inputEl.src = w.value + if (type === 'video') { + w.inputEl.setAttribute('type', 'video/webm'); + w.inputEl.autoplay = true + w.inputEl.loop = true + w.inputEl.controls = false; + } + w.inputEl.onload = function () { + w.inputRatio = w.inputEl.naturalWidth / w.inputEl.naturalHeight + } + document.body.appendChild(w.inputEl) + return w + } + +const gif_preview = { + name: 'AnimateDiff.gif_preview', + async beforeRegisterNodeDef(nodeType, nodeData, app) { + switch (nodeData.name) { + case 'ADE_AnimateDiffCombine':{ + const onExecuted = nodeType.prototype.onExecuted + nodeType.prototype.onExecuted = function (message) { + const prefix = 'ad_gif_preview_' + const r = onExecuted ? onExecuted.apply(this, message) : undefined + + if (this.widgets) { + const pos = this.widgets.findIndex((w) => w.name === `${prefix}_0`) + if (pos !== -1) { + for (let i = pos; i < this.widgets.length; i++) { + this.widgets[i].onRemoved?.() + } + this.widgets.length = pos + } + if (message?.gifs) { + message.gifs.forEach((params, i) => { + const previewUrl = api.apiURL( + '/view?' + new URLSearchParams(params).toString() + ) + const w = this.addCustomWidget( + CreatePreviewElement(`${prefix}_${i}`, previewUrl, params.format || 'image/gif') + ) + w.parent = this + }) + } + const onRemoved = this.onRemoved + this.onRemoved = () => { + cleanupNode(this) + return onRemoved?.() + } + } + this.setSize([this.size[0], this.computeSize([this.size[0], this.size[1]])[1]]) + return r + } + break + } + } + } +} + +app.registerExtension(gif_preview) diff --git a/custom_nodes/ComfyUI-Impact-Pack/LICENSE.txt b/custom_nodes/ComfyUI-Impact-Pack/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..f288702d2fa16d3cdf0035b15a9fcbc552cd88e7 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/LICENSE.txt @@ -0,0 +1,674 @@ + GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. 
+ + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. 
+ + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. 
You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. 
+ + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. + + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. 
In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. + + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. 
+ + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. 
If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). 
To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. 
+ + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. 
+ + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + + Copyright (C) + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . + +Also add information on how to contact you by electronic and paper mail. + + If the program does terminal interaction, make it output a short +notice like this when it starts in an interactive mode: + + Copyright (C) + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, your program's commands +might be different; for a GUI interface, you would use an "about box". + + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU GPL, see +. + + The GNU General Public License does not permit incorporating your program +into proprietary programs. If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications with +the library. If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +. diff --git a/custom_nodes/ComfyUI-Impact-Pack/README.md b/custom_nodes/ComfyUI-Impact-Pack/README.md new file mode 100644 index 0000000000000000000000000000000000000000..cc63260ac92b35a99899daad070723cbb031073b --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/README.md @@ -0,0 +1,454 @@ +[![Youtube Badge](https://img.shields.io/badge/Youtube-FF0000?style=for-the-badge&logo=Youtube&logoColor=white&link=https://www.youtube.com/watch?v=AccoxDZIg3Y&list=PL_Ej2RDzjQLGfEeizq4GISeY3FtVyFmGP)](https://www.youtube.com/watch?v=AccoxDZIg3Y&list=PL_Ej2RDzjQLGfEeizq4GISeY3FtVyFmGP) + +# ComfyUI-Impact-Pack + +**Custom nodes pack for ComfyUI** +This custom node helps to conveniently enhance images through Detector, Detailer, Upscaler, Pipe, and more. + + +## NOTICE +* V4.77: Compatibility patch applied. Requires ComfyUI version (Oct. 8th) or later. +* V4.73.3: ControlNetApply (SEGS) supports AnimateDiff +* V4.20.1: Due to the feature update in `RegionalSampler`, the parameter order has changed, causing malfunctions in previously created `RegionalSamplers`. Please adjust the parameters accordingly. 
+* V4.12: `MASKS` is changed to `MASK`. +* V4.7.2 isn't compatible with old version of `ControlNet Auxiliary Preprocessor`. If you will use `MediaPipe FaceMesh to SEGS` update to latest version(Sep. 17th). +* Selection weight syntax is changed(: -> ::) since V3.16. ([tutorial](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/ImpactWildcardProcessor.md)) +* Starting from V3.6, requires latest version(Aug 8, 9ccc965) of ComfyUI. +* **In versions below V3.3.1, there was an issue with the image quality generated after using the UltralyticsDetectorProvider. Please make sure to upgrade to a newer version.** +* Starting from V3.0, nodes related to `mmdet` are optional nodes that are activated only based on the configuration settings. + - Through ComfyUI-Impact-Subpack, you can utilize UltralyticsDetectorProvider to access various detection models. +* Between versions 2.22 and 2.21, there is partial compatibility loss regarding the Detailer workflow. If you continue to use the existing workflow, errors may occur during execution. An additional output called "enhanced_alpha_list" has been added to Detailer-related nodes. +* The permission error related to cv2 that occurred during the installation of Impact Pack has been patched in version 2.21.4. However, please note that the latest versions of ComfyUI and ComfyUI-Manager are required. +* The "PreviewBridge" feature may not function correctly on ComfyUI versions released before July 1, 2023. +* Attempting to load the "ComfyUI-Impact-Pack" on ComfyUI versions released before June 27, 2023, will result in a failure. +* With the addition of wildcard support in FaceDetailer, the structure of DETAILER_PIPE-related nodes and Detailer nodes has changed. There may be malfunctions when using the existing workflow. + + +## Custom Nodes +* [Detectors](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/detectors.md) + * SAMLoader - Loads the SAM model. + * UltralyticsDetectorProvider - Loads the Ultralystics model to provide SEGM_DETECTOR, BBOX_DETECTOR. + - Unlike `MMDetDetectorProvider`, for segm models, `BBOX_DETECTOR` is also provided. + - The various models available in UltralyticsDetectorProvider can be downloaded through **ComfyUI-Manager**. + * ONNXDetectorProvider - Loads the ONNX model to provide BBOX_DETECTOR. + * CLIPSegDetectorProvider - Wrapper for CLIPSeg to provide BBOX_DETECTOR. + * You need to install the ComfyUI-CLIPSeg node extension. + * SEGM Detector (combined) - Detects segmentation and returns a mask from the input image. + * BBOX Detector (combined) - Detects bounding boxes and returns a mask from the input image. + * SAMDetector (combined) - Utilizes the SAM technology to extract the segment at the location indicated by the input SEGS on the input image and outputs it as a unified mask. + * SAMDetector (Segmented) - It is similar to `SAMDetector (combined)`, but it separates and outputs the detected segments. Multiple segments can be found for the same detected area, and currently, a policy is in place to group them arbitrarily in sets of three. This aspect is expected to be improved in the future. + * As a result, it outputs the `combined_mask`, which is a unified mask, and `batch_masks`, which are multiple masks grouped together in batch form. + * While `batch_masks` may not be completely separated, it provides functionality to perform some level of segmentation. 
+ * Simple Detector (SEGS) - Operating primarily with `BBOX_DETECTOR`, and with the additional provision of `SAM_MODEL` or `SEGM_DETECTOR`, this node internally generates improved SEGS through mask operations on both *bbox* and *silhouette*. It serves as a convenient tool to simplify a somewhat intricate workflow. + +* ControlNet + * ControlNetApply (SEGS) - To apply ControlNet in SEGS, you need to use the Preprocessor Provider node from the Inspire Pack to utilize this node. + * `segs_preprocessor` and `control_image` can be selectively applied. If an `control_image` is given, `segs_preprocessor` will be ignored. + * If set to `control_image`, you can preview the cropped cnet image through `SEGSPreview (CNET Image)`. Images generated by `segs_preprocessor` should be verified through the `cnet_images` output of each Detailer. + * The `segs_preprocessor` operates by applying preprocessing on-the-fly based on the cropped image during the detailing process, while `control_image` will be cropped and used as input to `ControlNetApply (SEGS)`. + * ControlNetClear (SEGS) - Clear applied ControlNet in SEGS + +* Bitwise(SEGS & SEGS) - Performs a 'bitwise and' operation between two SEGS. +* Bitwise(SEGS - SEGS) - Subtracts one SEGS from another. +* Bitwise(SEGS & MASK) - Performs a bitwise AND operation between SEGS and MASK. +* Bitwise(SEGS & MASKS ForEach) - Performs a bitwise AND operation between SEGS and MASKS. + * Please note that this operation is performed with batches of MASKS, not just a single MASK. +* Bitwise(MASK & MASK) - Performs a 'bitwise and' operation between two masks. +* Bitwise(MASK - MASK) - Subtracts one mask from another. +* Bitwise(MASK + MASK) - Combine two masks. +* SEGM Detector (SEGS) - Detects segmentation and returns SEGS from the input image. +* BBOX Detector (SEGS) - Detects bounding boxes and returns SEGS from the input image. + +* Detailer + * Detailer (SEGS) - Refines the image based on SEGS. + * DetailerDebug (SEGS) - Refines the image based on SEGS. Additionally, it provides the ability to monitor the cropped image and the refined image of the cropped image. + * To prevent regeneration caused by the seed that does not change every time when using 'external_seed', please disable the 'seed random generate' option in the 'Detailer...' node. + * MASK to SEGS - Generates SEGS based on the mask. + * MASK to SEGS For AnimateDiff - Generates SEGS based on the mask for AnimateDiff. + * MediaPipe FaceMesh to SEGS - Separate each landmark from the mediapipe facemesh image to create labeled SEGS. + * Usually, the size of images created through the MediaPipe facemesh preprocessor is downscaled. It resizes the MediaPipe facemesh image to the original size given as reference_image_opt for matching sizes during processing. + * ToBinaryMask - Separates the mask generated with alpha values between 0 and 255 into 0 and 255. The non-zero parts are always set to 255. + * Masks to Mask List - This node converts the MASKS in batch form to a list of individual masks. + * Mask List to Masks - This node converts the MASK list to MASK batch form. + * EmptySEGS - Provides an empty SEGS. + * MaskPainter - Provides a feature to draw masks. + * FaceDetailer - Easily detects faces and improves them. + * FaceDetailer (pipe) - Easily detects faces and improves them (for multipass). + * MaskDetailer (pipe) - This is a simple inpaint node that applies the Detailer to the mask area. 
+ +* `FromDetailer (SDXL/pipe), BasicPipe -> DetailerPipe (SDXL), Edit DetailerPipe (SDXL)` - These are pipe functions used in Detailer for utilizing the refiner model of SDXL. + +* SEGS Manipulation nodes + * SEGSDetailer - Performs detailed work on SEGS without pasting it back onto the original image. + * SEGSPaste - Pastes the results of SEGS onto the original image. + * If `ref_image_opt` is present, the images contained within SEGS are ignored. Instead, the image within `ref_image_opt` corresponding to the crop area of SEGS is taken and pasted. The size of the image in `ref_image_opt` should be the same as the original image size. + * This node can be used in conjunction with the processing results of AnimateDiff. + * SEGSPreview - Provides a preview of SEGS. + * This option is used to preview the improved image through `SEGSDetailer` before merging it into the original. Prior to going through ```SEGSDetailer```, SEGS only contains mask information without image information. If fallback_image_opt is connected to the original image, SEGS without image information will generate a preview using the original image. However, if SEGS already contains image information, fallback_image_opt will be ignored. + * This node can be used in conjunction with the processing results of AnimateDiff. + * SEGSPreview (CNET Image) - Show images configured with `ControlNetApply (SEGS)` for debugging purposes. + * SEGSToImageList - Convert SEGS To Image List + * SEGSToMaskList - Convert SEGS To Mask List + * SEGS Filter (label) - This node filters SEGS based on the label of the detected areas. + * SEGS Filter (ordered) - This node sorts SEGS based on size and position and retrieves SEGs within a certain range. + * SEGS Filter (range) - This node retrieves only SEGs from SEGS that have a size and position within a certain range. + * SEGS Assign (label) - Assign labels sequentially to SEGS. This node is useful when used with `[LAB]` of FaceDetailer. + * SEGSConcat - Concatenate segs1 and segs2. If source shape of segs1 and segs2 are different from segs2 will be ignored. + * Picker (SEGS) - Among the input SEGS, you can select a specific SEG through a dialog. If no SEG is selected, it outputs an empty SEGS. Increasing the batch_size of SEGSDetailer can be used for the purpose of selecting from the candidates. + * Set Default Image For SEGS - Set a default image for SEGS. SEGS with images set this way do not need to have a fallback image set. When override is set to false, the original image is preserved. + * Remove Image from SEGS - Remove the image set for the SEGS that has been configured by "Set Default Image for SEGS" or SEGSDetailer. When the image for the SEGS is removed, the Detailer node will operate based on the currently processed image instead of the SEGS. + * Make Tile SEGS - [experimental] Create SEGS in the form of tiles from an image to facilitate experiments for Tiled Upscale using the Detailer. + * The `filter_in_segs_opt` and `filter_out_segs_opt` are optional inputs. If these inputs are provided, when creating the tiles, the mask for each tile is generated by overlapping with the mask of `filter_in_segs_opt` and excluding the overlap with the mask of `filter_out_segs_opt`. Tiles with an empty mask will not be created as SEGS. + * Dilate Mask (SEGS) - Dilate/Erosion Mask in SEGS + * Gaussian Blur Mask (SEGS) - Apply Gaussian Blur to Mask in SEGS + * SEGS_ELT Manipulation - experimental nodes + * DecomposeSEGS - Decompose SEGS to allow for detailed manipulation. 
+ * AssembleSEGS - Reassemble the decomposed SEGS. + * From SEG_ELT - Extract detailed information from SEG_ELT. + * Edit SEG_ELT - Modify some of the information in SEG_ELT. + * Dilate SEG_ELT - Dilate the mask of SEG_ELT. + +* Mask Manipulation + * Dilate Mask - Dilate Mask. + * Support erosion for negative value. + * Gaussian Blur Mask - Apply Gaussian Blur to Mask. You can utilize this for mask feathering. + +* Pipe nodes + * ToDetailerPipe, FromDetailerPipe - These nodes are used to bundle multiple inputs used in the detailer, such as models and vae, ..., into a single DETAILER_PIPE or extract the elements that are bundled in the DETAILER_PIPE. + * ToBasicPipe, FromBasicPipe - These nodes are used to bundle model, clip, vae, positive conditioning, and negative conditioning into a single BASIC_PIPE, or extract each element from the BASIC_PIPE. + * EditBasicPipe, EditDetailerPipe - These nodes are used to replace some elements in BASIC_PIPE or DETAILER_PIPE. + * FromDetailerPipe_v2, FromBasicPipe_v2 - It has the same functionality as `FromDetailerPipe` and `FromBasicPipe`, but it has an additional output that directly exports the input pipe. It is useful when editing EditBasicPipe and EditDetailerPipe. +* Latent Scale (on Pixel Space) - This node converts latent to pixel space, upscales it, and then converts it back to latent. + * If upscale_model_opt is provided, it uses the model to upscale the pixel and then downscales it using the interpolation method provided in scale_method to the target resolution. +* PixelKSampleUpscalerProvider - An upscaler is provided that converts latent to pixels using VAEDecode, performs upscaling, converts back to latent using VAEEncode, and then performs k-sampling. This upscaler can be attached to nodes such as 'Iterative Upscale' for use. + * Similar to 'Latent Scale (on Pixel Space)', if upscale_model_opt is provided, it performs pixel upscaling using the model. +* PixelTiledKSampleUpscalerProvider - It is similar to PixelKSampleUpscalerProvider, but it uses ComfyUI_TiledKSampler and Tiled VAE Decoder/Encoder to avoid GPU VRAM issues at high resolutions. + * You need to install the [BlenderNeko/ComfyUI_TiledKSampler](https://github.com/BlenderNeko/ComfyUI_TiledKSampler) node extension. + +* PK_HOOK + * DenoiseScheduleHookProvider - IterativeUpscale provides a hook that gradually changes the denoise to target_denoise as the iterative-step progresses. + * CfgScheduleHookProvider - IterativeUpscale provides a hook that gradually changes the cfg to target_cfg as the iterative-step progresses. + * StepsScheduleHookProvider - IterativeUpscale provides a hook that gradually changes the sampling-steps to target_steps as the iterative-step progresses. + * NoiseInjectionHookProvider - During each iteration of IterativeUpscale, noise is injected into the latent space while varying the strength according to a schedule. + * You need to install the [BlenderNeko/ComfyUI_Noise](https://github.com/BlenderNeko/ComfyUI_Noise) node extension. + * The seed serves as the initial value required for generating noise, and it increments by 1 with each iteration as the process unfolds. + * The source determines the types of CPU noise and GPU noise to be configured. + * Currently, there is only a simple schedule available, where the strength of the noise varies from start_strength to end_strength during the progression of each iteration. + * UnsamplerHookProvider - Apply Unsampler during each iteration. To use this node, ComfyUI_Noise must be installed. 
+ * PixelKSampleHookCombine - This is used to connect two PK_HOOKs. hook1 is executed first and then hook2 is executed. + * If you want to simultaneously change cfg and denoise, you can combine the PK_HOOKs of CfgScheduleHookProvider and PixelKSampleHookCombine. + +* DETAILER_HOOK + * NoiseInjectionDetailerHookProvider - The `detailer_hook` is a hook in the `Detailer` that injects noise during the processing of each SEGS. + * UnsamplerDetailerHookProvider - Apply Unsampler during each cycle. To use this node, ComfyUI_Noise must be installed. + * DenoiseSchedulerDetailerHookProvider - During the progress of the cycle, the detailer's denoise is altered up to the `target_denoise`. + * CoreMLDetailerHookProvider - CoreML supports only 512x512, 512x768, 768x512, 768x768 size sampling. CoreMLDetailerHookProvider precisely fixes the upscale of the crop_region to this size. When using this hook, it will always be selected size, regardless of the guide_size. However, if the guide_size is too small, skipping will occur. + * DetailerHookCombine - This is used to connect two DETAILER_HOOKs. Similar to PixelKSampleHookCombine. + * SEGSOrderedFilterDetailerHook, SEGSRangeFilterDetailerHook, SEGSLabelFilterDetailerHook - There are a wrapper node that provides SEGSFilter nodes to be applied in FaceDetailer or Detector by creating DETAILER_HOOK. + * PreviewDetailerHOok - Connecting this hook node helps provide assistance for viewing previews whenever SEGS Detailing tasks are completed. When working with a large number of SEGS, such as Make Tile SEGS, it allows for monitoring the situation as improvements progress incrementally. + * Since this is the hook applied when pasting onto the original image, it has no effect on nodes like `SEGSDetailer`. + +* Iterative Upscale (Latent/on Pixel Space) - The upscaler takes the input upscaler and splits the scale_factor into steps, then iteratively performs upscaling. +This takes latent as input and outputs latent as the result. +* Iterative Upscale (Image) - The upscaler takes the input upscaler and splits the scale_factor into steps, then iteratively performs upscaling. This takes image as input and outputs image as the result. + * Internally, this node uses 'Iterative Upscale (Latent)'. + +* TwoSamplersForMask - This node can apply two samplers depending on the mask area. The base_sampler is applied to the area where the mask is 0, while the mask_sampler is applied to the area where the mask is 1. + * Note: The latent encoded through VAEEncodeForInpaint cannot be used. +* KSamplerProvider - This is a wrapper that enables KSampler to be used in TwoSamplersForMask TwoSamplersForMaskUpscalerProvider. +* TiledKSamplerProvider - ComfyUI_TiledKSampler is a wrapper that provides KSAMPLER. + * You need to install the [BlenderNeko/ComfyUI_TiledKSampler](https://github.com/BlenderNeko/ComfyUI_TiledKSampler) node extension. + +* TwoAdvancedSamplersForMask - TwoSamplersForMask is similar to TwoAdvancedSamplersForMask, but they differ in their operation. TwoSamplersForMask performs sampling in the mask area only after all the samples in the base area are finished. On the other hand, TwoAdvancedSamplersForMask performs sampling in both the base area and the mask area sequentially at each step. +* KSamplerAdvancedProvider - This is a wrapper that enables KSampler to be used in TwoAdvancedSamplersForMask, RegionalSampler. + * sigma_factor: By multiplying the denoise schedule by the sigma_factor, you can adjust the amount of denoising based on the configured denoise. 
+ +* TwoSamplersForMaskUpscalerProvider - This is an Upscaler that extends TwoSamplersForMask to be used in Iterative Upscale. + * TwoSamplersForMaskUpscalerProviderPipe - pipe version of TwoSamplersForMaskUpscalerProvider. + +* Image Utils + * PreviewBridge (image) - This custom node can be used with a bridge for image when using the MaskEditor feature of Clipspace. + * PreviewBridge (latent) - This custom node can be used with a bridge for latent image when using the MaskEditor feature of Clipspace. + * If a latent with a mask is provided as input, it displays the mask. Additionally, the mask output provides the mask set in the latent. + * If a latent without a mask is provided as input, it outputs the original latent as is, but the mask output provides an output with the entire region set as a mask. + * When set mask through MaskEditor, a mask is applied to the latent, and the output includes the stored mask. The same mask is also output as the mask output. + * When connected to `vae_opt`, it takes higher priority than the `preview_method`. + * ImageSender, ImageReceiver - The images generated in ImageSender are automatically sent to the ImageReceiver with the same link_id. + * LatentSender, LatentReceiver - The latent generated in LatentSender are automatically sent to the LatentReceiver with the same link_id. + * Furthermore, LatentSender is implemented with PreviewLatent, which stores the latent in payload form within the image thumbnail. + * Due to the current structure of ComfyUI, it is unable to distinguish between SDXL latent and SD1.5/SD2.1 latent. Therefore, it generates thumbnails by decoding them using the SD1.5 method. + +* Switch nodes + * Switch (image,mask), Switch (latent), Switch (SEGS) - Among multiple inputs, it selects the input designated by the selector and outputs it. The first input must be provided, while the others are optional. However, if the input specified by the selector is not connected, an error may occur. + * Switch (Any) - This is a Switch node that takes an arbitrary number of inputs and produces a single output. Its type is determined when connected to any node, and connecting inputs increases the available slots for connections. + * Inversed Switch (Any) - In contrast to `Switch (Any)`, it takes a single input and outputs one of many. Due to ComfyUI's functional limitations, the value of `select` must be determined at the time of queuing a prompt, and while it can serve as a `Primitive Node` or `ImpactInt`, it cannot function properly when connected through other nodes. + * Guide + * When the `Switch (Any)` and `Inversed Switch (Any)` selects are transformed into primitives, it's important to be cautious because the select range is not appropriately constrained, potentially leading to unintended behavior. + * `Switch (image,mask)`, `Switch (latent)`, `Switch (SEGS)`, `Switch (Any)` supports `sel_mode` param. The `sel_mode` sets the moment at which the `select` parameter is determined. `select_on_prompt` determines the `select` at the time of queuing the prompt, while `select_on_execution` determines it during the execution of the workflow. While `select_on_execution` offers more flexibility, it can potentially trigger workflow execution errors due to running nodes that may be impossible to execute within the limitations of ComfyUI. `select_on_prompt` bypasses this constraint by treating any inputs not selected as if they were disconnected. 
However, please note that when using `select_on_prompt`, the `select` can only be used with widgets or `Primitive Nodes` determined at the queue prompt. + * There is an issue when connecting the built-in reroute node with the switch's input/output slots. it can lead to forced disconnections during workflow loading. Therefore, it is advisable not to use reroute for making connections in such cases. However, there are no issues when using the reroute node in Pythongossss. + +* [Wildcards](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/ImpactWildcard.md) - These are nodes that supports syntax in the form of `__wildcard-name__` and dynamic prompt syntax like `{a|b|c}`. + * Wildcard files can be used by placing `.txt` or `.yaml` files under either `ComfyUI-Impact-Pack/wildcards` or `ComfyUI-Impact-Pack/custom_wildcards` paths. + * You can download and use [Wildcard YAML](https://civitai.com/models/138970/billions-of-wildcards-all-in-one) files in this format. + * After the first execution, you can change the custom wildcards path in the `custom_wildcards` entry within the `ComfyUI-Impact-Pack/impact-pack.ini` file created. + * ImpactWildcardProcessor - The text is generated by processing the wildcard in the Text. If the mode is set to "populate", a dynamic prompt is generated with each execution and the input is filled in the second textbox. If the mode is set to "fixed", the content of the second textbox remains unchanged. + * When an image is generated with the "fixed" mode, the prompt used for that particular generation is stored in the metadata. + * ImpactWildcardEncode - Similar to ImpactWildcardProcessor, this provides the loading functionality of LoRAs (e.g. ``). Populated prompts are encoded using the clip after all the lora loading is done. + * If the `Inspire Pack` is installed, you can use **Lora Block Weight** in the form of `LBW=lbw spec;` + * ``, ``, `` + +* Regional Sampling - These nodes offer the capability to divide regions and perform partial sampling using a mask. Unlike TwoSamplersForMask, sampling for each region is applied during each step. + * RegionalPrompt - This node combines a **mask** for specifying regions and the **sampler** to apply to each region to create `REGIONAL_PROMPTS`. + * CombineRegionalPrompts - Combine multiple `REGIONAL_PROMPTS` to create a single `REGIONAL_PROMPTS`. + * RegionalSampler - This node performs sampling using a base sampler and regional prompts. Sampling by the base sampler is executed at each step, while sampling for each region is performed through the sampler bound to each region. + * overlap_factor - Specifies the amount of overlap for each region to blend well with the area outside the mask. + * restore_latent - When sampling each region, restore the areas outside the mask to the base latent, preventing additional noise from being introduced outside the mask during region sampling. + * RegionalSamplerAdvanced - This is the Advanced version of the RegionalSampler. You can control it using `step` instead of `denoise`. + * NOTE: The `sde` sampler and `uni_pc` sampler introduce additional noise during each step of the sampling process. To mitigate this, when sampling each region, the `uni_pc` sampler applies additional `dpmpp_fast`, and the sde sampler applies the `dpmpp_2m` sampler as an additional measure. 
+ +* KSampler (pipe), KSampler (advanced/pipe) + +* Image batch To Image List - Convert Image batch to Image List +- You can use images generated in a multi batch to handle them +* Make Image List - Convert multiple images into a single image list +* Make Image Batch - Convert multiple images into a single image batch +- The input of images can be scaled up as needed + +* String Selector - It selects and returns a portion of the string. When `multiline` mode is disabled, it simply returns the string of the line pointed to by the selector. When `multiline` mode is enabled, it divides the string based on lines that start with `#` and returns them. If the `select` value is larger than the number of items, it will start counting from the first line again and return accordingly. +* Combine Conditionings - It takes multiple conditionings as input and combines them into a single conditioning. +* Concat Conditionings - It takes multiple conditionings as input and concat them into a single conditioning. + +* Logics (experimental) - These nodes are experimental nodes designed to implement the logic for loops and dynamic switching. + * ImpactCompare, ImpactConditionalBranch, ImpactConditionalBranchSelMode, ImpactInt, ImpactValueSender, ImpactValueReceiver, ImpactImageInfo, ImpactMinMax, ImpactNeg, ImpactConditionalStopIteration + * ImpactIsNotEmptySEGS - This node returns `true` only if the input SEGS is not empty. + * Queue Trigger - When this node is executed, it adds a new queue to assist with repetitive tasks. It will only execute if the signal's status changes. + * Queue Trigger (Countdown) - Like the Queue Trigger, it adds a queue, but only adds it if it's greater than 1, and decrements the count by one each time it runs. + * Sleep - Waits for the specified time (in seconds). + * Set Widget Value - This node sets one of the optional inputs to the specified node's widget. An error may occur if the types do not match. + * Set Mute State - This node changes the mute state of a specific node. + * Control Bridge - This node modifies the state of the connected control nodes based on the `mode` and `behavior` . If there are nodes that require a change, the current execution is paused, the mute status is updated, and a new prompt queue is inserted. + * When the `mode` is `active`, it makes the connected control nodes active regardless of the behavior. + * When the `mode` is `Bypass/Mute`, it changes the state of the connected nodes based on whether the behavior is `Bypass` or `Mute`. + * **Limitation**: Due to these characteristics, it does not function correctly when the batch count exceeds 1. Additionally, it does not guarantee proper operation when the seed is randomized or when the state of nodes is altered by actions such as `Queue Trigger`, `Set Widget Value`, `Set Mute`, before the Control Bridge. + * When utilizing this node, please structure the workflow in such a way that `Queue Trigger`, `Set Widget Value`, `Set Mute State`, and similar actions are executed at the end of the workflow. + * If you want to change the value of the seed at each iteration, please ensure that Set Widget Value is executed at the end of the workflow instead of using randomization. + * It is not a problem if the seed changes due to randomization as long as it occurs after the Control Bridge section. + * Remote Boolean (on prompt), Remote Int (on prompt) - At the start of the prompt, this node forcibly sets the `widget_value` of `node_id`. It is disregarded if the target widget type is different. 
+ * You can find the `node_id` by checking through [ComfyUI-Manager](https://github.com/ltdrdata/ComfyUI-Manager) using the format `Badge: #ID Nickname`. + * Experimental set of nodes for implementing loop functionality (tutorial to be prepared later / [example workflow](test/loop-test.json)). + +* HuggingFace - These nodes provide functionalities based on HuggingFace repository models. + * `HF Transformers Classifier Provider` - This is a node that provides a classifier based on HuggingFace's transformers models. + * The 'repo id' parameter should contain HuggingFace's repo id. When `preset_repo_id` is set to `Manual repo id`, use the manually entered repo id in `manual_repo_id`. + * e.g. 'rizvandwiki/gender-classification-2' is a repository that provides a model for gender classification. + * `SEGS Classify` - This node utilizes the `TRANSFORMERS_CLASSIFIER` loaded with 'HF Transformers Classifier Provider' to classify `SEGS`. + * The 'expr' allows for forms like `label > number`, and in the case of `preset_expr` being `Manual expr`, it uses the expression entered in `manual_expr`. + * For example, in the case of `male <= 0.4`, if the score of the `male` label in the classification result is less than or equal to 0.4, it is categorized as `filtered_SEGS`, otherwise, it is categorized as `remained_SEGS`. + * For supported labels, please refer to the `config.json` of the respective HuggingFace repository. + * `#Female` and `#Male` are symbols that group multiple labels such as `Female, women, woman, ...`, for convenience, rather than being single labels. + +## MMDet nodes +* MMDetDetectorProvider - Loads the MMDet model to provide BBOX_DETECTOR and SEGM_DETECTOR. +* To use the existing MMDetDetectorProvider, you need to enable the MMDet usage configuration. + + +## Feature +* Interactive SAM Detector (Clipspace) - When you right-click on a node that has 'MASK' and 'IMAGE' outputs, a context menu will open. From this menu, you can either open a dialog to create a SAM Mask using 'Open in SAM Detector', or copy the content (likely mask data) using 'Copy (Clipspace)' and generate a mask using 'Impact SAM Detector' from the clipspace menu, and then paste it using 'Paste (Clipspace)'. +* Providing a feature to detect errors that occur when mixing models and clips from checkpoints such as `SDXL Base`, `SDXL Refiner`, `SD1.x`, `SD2.x` during sample execution, and reporting appropriate errors. + +## Deprecated +* The following nodes have been kept only for compatibility with existing workflows, and are no longer supported. Please replace them with new nodes. + * ONNX Detector (SEGS) - BBOX Detector (SEGS) + * MMDetLoader -> MMDetDetectorProvider + * SegsMaskCombine -> SEGS to MASK (combined) + * BboxDetectorForEach -> BBOX Detector (SEGS) + * SegmDetectorForEach -> SEGM Detector (SEGS) + * BboxDetectorCombined -> BBOX Detector (combined) + * SegmDetectorCombined -> SEGM Detector (combined) + * MaskPainter -> PreviewBridge +* To use the existing deprecated legacy nodes, you need to enable the MMDet usage configuration. + + +## Ultralytics models +* huggingface.co/Bingsu/[adetailer](https://github.com/ultralytics/assets/releases/) - You can download face, people detection models, and clothing detection models. +* ultralytics/[assets](https://github.com/ultralytics/assets/releases/) - You can download various types of detection models other than faces or people. 
+* civitai/[adetailer](https://civitai.com/search/models?sortBy=models_v5&query=adetailer) - You can download various types detection models....Many models are associated with NSFW content. + +## How to activate 'MMDet usage' +* Upon the initial execution, an `impact-pack.ini` file will be generated in the custom_nodes/ComfyUI-Impact-Pack directory. +``` +[default] +dependency_version = 2 +mmdet_skip = True +``` +* Change `mmdet_skip = True` to `mmdet_skip = False` +``` +[default] +dependency_version = 2 +mmdet_skip = False +``` +* Restart ComfyUI + + +## Installation + +1. `cd custom_nodes` +1. `git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack.git` +3. `cd ComfyUI-Impact-Pack` +4. (optional) `git submodule update --init --recursive` + * Impact Pack will automatically download subpack during its initial launch. +5. (optional) `python install.py` + * Impact Pack will automatically install its dependencies during its initial launch. + * For the portable version, you should execute the command `..\..\..\python_embeded\python.exe install.py` to run the installation script. + + +6. Restart ComfyUI + +* NOTE: If an error occurs during the installation process, please refer to [Troubleshooting Page](troubleshooting/TROUBLESHOOTING.md) for assistance. +* You can use this colab notebook [colab notebook](https://colab.research.google.com/github/ltdrdata/ComfyUI-Impact-Pack/blob/Main/notebook/comfyui_colab_impact_pack.ipynb) to launch it. This notebook automatically downloads the impact pack to the custom_nodes directory, installs the tested dependencies, and runs it. + +## Package Dependencies (If you need to manual setup.) + +* pip install + * openmim + * segment-anything + * ultralytics + * scikit-image + * piexif + * (optional) pycocotools + * (optional) onnxruntime + +* mim install (optional) + * mmcv==2.0.0, mmdet==3.0.0, mmengine==0.7.2 + +* linux packages (ubuntu) + * libgl1-mesa-glx + * libglib2.0-0 + + +## Config example +* Once you run the Impact Pack for the first time, an `impact-pack.ini` file will be automatically generated in the Impact Pack directory. You can modify this configuration file to customize the default behavior. + * `dependency_version` - don't touch this + * `mmdet_skip` - disable MMDet based nodes and legacy nodes if `True` + * `sam_editor_cpu` - use cpu for `SAM editor` instead of gpu + * sam_editor_model: Specify the SAM model for the SAM editor. + * You can download various SAM models using ComfyUI-Manager. + * Path to SAM model: `ComfyUI/models/sams` +``` +[default] +dependency_version = 9 +mmdet_skip = True +sam_editor_cpu = False +sam_editor_model = sam_vit_b_01ec64.pth +``` + + +## Other Materials (auto-download on initial startup) + +* ComfyUI/models/mmdets/bbox <= https://huggingface.co/dustysys/ddetailer/resolve/main/mmdet/bbox/mmdet_anime-face_yolov3.pth +* ComfyUI/models/mmdets/bbox <= https://raw.githubusercontent.com/Bing-su/dddetailer/master/config/mmdet_anime-face_yolov3.py +* ComfyUI/models/sams <= https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth + +## Troubleshooting page +* [Troubleshooting Page](troubleshooting/TROUBLESHOOTING.md) + + +## How to use (DDetailer feature) + +#### 1. Basic auto face detection and refine exapmle. +![simple](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/simple.png) +* The face that has been damaged due to low resolution is restored with high resolution by generating and synthesizing it, in order to restore the details. 
+* The FaceDetailer node is a combination of a Detector node for face detection and a Detailer node for image enhancement. See the [Advanced Tutorial](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/tutorial/advanced.md) for a more detailed explanation. +* Pass the MMDetLoader 's bbox model and the detection model loaded by SAMLoader to FaceDetailer . Since it performs the function of KSampler for image enhancement, it overlaps with KSampler's options. +* The MASK output of FaceDetailer provides a visualization of where the detected and enhanced areas are. + +![simple-orig](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/simple-original.png) ![simple-refined](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/simple-refined.png) +* You can see that the face in the image on the left has increased detail as in the image on the right. + +#### 2. 2Pass refine (restore a severely damaged face) +![2pass-workflow-example](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/2pass-simple.png) +* Although two FaceDetailers can be attached together for a 2-pass configuration, various common inputs used in KSampler can be passed through DETAILER_PIPE, so FaceDetailerPipe can be used to configure easily. +* In 1pass, only rough outline recovery is required, so restore with a reasonable resolution and low options. However, if you increase the dilation at this time, not only the face but also the surrounding parts are included in the recovery range, so it is useful when you need to reshape the face other than the facial part. + +![2pass-example-original](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/2pass-original.png) ![2pass-example-middle](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/2pass-1pass.png) ![2pass-example-result](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/2pass-2pass.png) +* In the first stage, the severely damaged face is restored to some extent, and in the second stage, the details are restored + +#### 3. Face Bbox(bounding box) + Person silhouette segmentation (prevent distortion of the background.) +![combination-workflow-example](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/combination.jpg) +![combination-example-original](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/combination-original.png) ![combination-example-refined](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/combination-refined.png) + +* Facial synthesis that emphasizes details is delicately aligned with the contours of the face, and it can be observed that it does not affect the image outside of the face. + +* The BBoxDetectorForEach node is used to detect faces, and the SAMDetectorCombined node is used to find the segment related to the detected face. By using the Segs & Mask node with the two masks obtained in this way, an accurate mask that intersects based on segs can be generated. If this generated mask is input to the DetailerForEach node, only the target area can be created in high resolution from the image and then composited. + +#### 4. 
Iterative Upscale +![upscale-workflow-example](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/upscale-workflow.png) + +* The IterativeUpscale node is a node that enlarges an image/latent by a scale_factor. In this process, the upscale is carried out progressively by dividing it into steps. +* IterativeUpscale takes an Upscaler as an input, similar to a plugin, and uses it during each iteration. PixelKSampleUpscalerProvider is an Upscaler that converts the latent representation to pixel space and applies ksampling. + * The upscale_model_opt is an optional parameter that determines whether to use the upscale function of the model base if available. Using the upscale function of the model base can significantly reduce the number of iterative steps required. If an x2 upscaler is used, the image/latent is first upscaled by a factor of 2 and then downscaled to the target scale at each step before further processing is done. + +* The following image is an image of 304x512 pixels and the same image scaled up to three times its original size using IterativeUpscale. + +![combination-example-original](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/upscale-original.png) ![combination-example-refined](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/upscale-3x.png) + + +#### 5. Interactive SAM Detector (Clipspace) + +* When you right-click on the node that outputs 'MASK' and 'IMAGE', a menu called "Open in SAM Detector" appears, as shown in the following picture. Clicking on the menu opens a dialog in SAM's functionality, allowing you to generate a segment mask. +![samdetector-menu](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/SAMDetector-menu.png) + +* By clicking the left mouse button on a coordinate, a positive prompt in blue color is entered, indicating the area that should be included. Clicking the right mouse button on a coordinate enters a negative prompt in red color, indicating the area that should be excluded. Positive prompts represent the areas that should be included, while negative prompts represent the areas that should be excluded. +* You can remove the points that were added by using the "undo" button. After selecting the points, pressing the "detect" button generates the mask. Additionally, you can adjust the fidelity slider to determine the extent to which the mask belongs to the confidence region. + +![samdetector-dialog](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/SAMDetector-dialog.jpg) + +* If you opened the dialog through "Open in SAM Detector" from the node, you can directly apply the changes by clicking the "Save to node" button. However, if you opened the dialog through the "clipspace" menu, you can save it to clipspace by clicking the "Save" button. + +![samdetector-result](https://github.com/ltdrdata/ComfyUI-extension-tutorials/raw/Main/ComfyUI-Impact-Pack/images/SAMDetector-result.jpg) + +* When you execute using the reflected mask in the node, you can observe that the image and mask are displayed separately. + + +## Others Tutorials +* [ComfyUI-extension-tutorials/ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-extension-tutorials/tree/Main/ComfyUI-Impact-Pack) - You can find various tutorials and workflows on this page. 
+* [Advanced Tutorial](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/advanced.md) +* [SAM Application](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/sam.md) +* [PreviewBridge](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/previewbridge.md) +* [Mask Pointer](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/maskpointer.md) +* [ONNX Tutorial](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/ONNX.md) +* [CLIPSeg Tutorial](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/clipseg.md) +* [Extreme Highresolution Upscale](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/extreme-upscale.md) +* [TwoSamplersForMask](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/TwoSamplers.md) +* [TwoAdvancedSamplersForMask](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/TwoAdvancedSamplers.md) +* [Advanced Iterative Upscale: PK_HOOK](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/pk_hook.md) +* [Advanced Iterative Upscale: TwoSamplersForMask Upscale Provider](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/TwoSamplersUpscale.md) +* [Interactive SAM + PreviewBridge](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/sam_with_preview_bridge.md) +* [ImageSender/ImageReceiver/LatentSender/LatentReceiver](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/sender_receiver.md) +* [ImpactWildcardProcessor](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/ImpactWildcardProcessor.md) + + +## Credits + +ComfyUI/[ComfyUI](https://github.com/comfyanonymous/ComfyUI) - A powerful and modular stable diffusion GUI. + +dustysys/[ddetailer](https://github.com/dustysys/ddetailer) - DDetailer for Stable-diffusion-webUI extension. + +Bing-su/[dddetailer](https://github.com/Bing-su/dddetailer) - The anime-face-detector used in ddetailer has been updated to be compatible with mmdet 3.0.0, and we have also applied a patch to the pycocotools dependency for Windows environment in ddetailer. + +facebook/[segment-anything](https://github.com/facebookresearch/segment-anything) - Segmentation Anything! + +hysts/[anime-face-detector](https://github.com/hysts/anime-face-detector) - Creator of `anime-face_yolov3`, which has impressive performance on a variety of art styles. + +open-mmlab/[mmdetection](https://github.com/open-mmlab/mmdetection) - Object detection toolset. `dd-person_mask2former` was trained via transfer learning using their [R-50 Mask2Former instance segmentation model](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask2former#instance-segmentation) as a base. + +biegert/[ComfyUI-CLIPSeg](https://github.com/biegert/ComfyUI-CLIPSeg) - This is a custom node that enables the use of CLIPSeg technology, which can find segments through prompts, in ComfyUI. + +BlenderNeok/[ComfyUI-TiledKSampler](https://github.com/BlenderNeko/ComfyUI_TiledKSampler) - +The tile sampler allows high-resolution sampling even in places with low GPU VRAM. 
+ +BlenderNeok/[ComfyUI_Noise](https://github.com/BlenderNeko/ComfyUI_Noise) - The noise injection feature relies on this function. + +WASasquatch/[was-node-suite-comfyui](https://github.com/WASasquatch/was-node-suite-comfyui) - A powerful custom node extensions of ComfyUI. + +Trung0246/[ComfyUI-0246](https://github.com/Trung0246/ComfyUI-0246) - Nice bypass hack! diff --git a/custom_nodes/ComfyUI-Impact-Pack/__init__.py b/custom_nodes/ComfyUI-Impact-Pack/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..c4b3b7e78868d486240610de15f9e187132db518 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/__init__.py @@ -0,0 +1,502 @@ +""" +@author: Dr.Lt.Data +@title: Impact Pack +@nickname: Impact Pack +@description: This extension offers various detector nodes and detailer nodes that allow you to configure a workflow that automatically enhances facial details. And provide iterative upscaler. +""" + +import shutil +import folder_paths +import os +import sys +import traceback + +comfy_path = os.path.dirname(folder_paths.__file__) +impact_path = os.path.join(os.path.dirname(__file__)) +subpack_path = os.path.join(os.path.dirname(__file__), "impact_subpack") +modules_path = os.path.join(os.path.dirname(__file__), "modules") +wildcards_path = os.path.join(os.path.dirname(__file__), "wildcards") +custom_wildcards_path = os.path.join(os.path.dirname(__file__), "custom_wildcards") + +sys.path.append(modules_path) + +import impact.config +import impact.sample_error_enhancer +print(f"### Loading: ComfyUI-Impact-Pack ({impact.config.version})") + + +def do_install(): + import importlib + spec = importlib.util.spec_from_file_location('impact_install', os.path.join(os.path.dirname(__file__), 'install.py')) + impact_install = importlib.util.module_from_spec(spec) + spec.loader.exec_module(impact_install) + + +# ensure dependency +if not os.path.exists(os.path.join(subpack_path, ".git")) and os.path.exists(subpack_path): + print(f"### CompfyUI-Impact-Pack: corrupted subpack detected.") + shutil.rmtree(subpack_path) + +if impact.config.get_config()['dependency_version'] < impact.config.dependency_version or not os.path.exists(subpack_path): + print(f"### ComfyUI-Impact-Pack: Updating dependencies [{impact.config.get_config()['dependency_version']} -> {impact.config.dependency_version}]") + do_install() + +sys.path.append(subpack_path) + +# Core +# recheck dependencies for colab +try: + import impact.subpack_nodes # This import must be done before cv2. + + import folder_paths + import torch + import cv2 + import numpy as np + import comfy.samplers + import comfy.sd + import warnings + from PIL import Image, ImageFilter + from skimage.measure import label, regionprops + from collections import namedtuple + import piexif + + if not impact.config.get_config()['mmdet_skip']: + import mmcv + from mmdet.apis import (inference_detector, init_detector) + from mmdet.evaluation import get_classes +except: + import importlib + print("### ComfyUI-Impact-Pack: Reinstall dependencies (several dependencies are missing.)") + do_install() + +import impact.impact_server # to load server api + +def setup_js(): + import nodes + js_dest_path = os.path.join(comfy_path, "web", "extensions", "impact-pack") + + if hasattr(nodes, "EXTENSION_WEB_DIRS"): + if os.path.exists(js_dest_path): + shutil.rmtree(js_dest_path) + else: + print(f"[WARN] ComfyUI-Impact-Pack: Your ComfyUI version is outdated. 
Please update to the latest version.") + # setup js + if not os.path.exists(js_dest_path): + os.makedirs(js_dest_path) + + js_src_path = os.path.join(impact_path, "js", "impact-pack.js") + shutil.copy(js_src_path, js_dest_path) + + js_src_path = os.path.join(impact_path, "js", "impact-sam-editor.js") + shutil.copy(js_src_path, js_dest_path) + + js_src_path = os.path.join(impact_path, "js", "comboBoolMigration.js") + shutil.copy(js_src_path, js_dest_path) + + +setup_js() + +from impact.impact_pack import * +from impact.detectors import * +from impact.pipe import * +from impact.logics import * +from impact.util_nodes import * +from impact.segs_nodes import * +from impact.special_samplers import * +from impact.hf_nodes import * +from impact.bridge_nodes import * +from impact.hook_nodes import * +from impact.animatediff_nodes import * + +import threading + +wildcard_path = impact.config.get_config()['custom_wildcards'] + + +def wildcard_load(): + with wildcards.wildcard_lock: + impact.wildcards.read_wildcard_dict(wildcards_path) + + try: + impact.wildcards.read_wildcard_dict(impact.config.get_config()['custom_wildcards']) + except Exception as e: + print(f"[Impact Pack] Failed to load custom wildcards directory.") + + print(f"[Impact Pack] Wildcards loading done.") + + +threading.Thread(target=wildcard_load).start() + + +NODE_CLASS_MAPPINGS = { + "SAMLoader": SAMLoader, + "CLIPSegDetectorProvider": CLIPSegDetectorProvider, + "ONNXDetectorProvider": ONNXDetectorProvider, + + "BitwiseAndMaskForEach": BitwiseAndMaskForEach, + "SubtractMaskForEach": SubtractMaskForEach, + + "DetailerForEach": DetailerForEach, + "DetailerForEachDebug": DetailerForEachTest, + "DetailerForEachPipe": DetailerForEachPipe, + "DetailerForEachDebugPipe": DetailerForEachTestPipe, + "DetailerForEachPipeForAnimateDiff": DetailerForEachPipeForAnimateDiff, + + "SAMDetectorCombined": SAMDetectorCombined, + "SAMDetectorSegmented": SAMDetectorSegmented, + + "FaceDetailer": FaceDetailer, + "FaceDetailerPipe": FaceDetailerPipe, + "MaskDetailerPipe": MaskDetailerPipe, + + "ToDetailerPipe": ToDetailerPipe, + "ToDetailerPipeSDXL": ToDetailerPipeSDXL, + "FromDetailerPipe": FromDetailerPipe, + "FromDetailerPipe_v2": FromDetailerPipe_v2, + "FromDetailerPipeSDXL": FromDetailerPipe_SDXL, + "ToBasicPipe": ToBasicPipe, + "FromBasicPipe": FromBasicPipe, + "FromBasicPipe_v2": FromBasicPipe_v2, + "BasicPipeToDetailerPipe": BasicPipeToDetailerPipe, + "BasicPipeToDetailerPipeSDXL": BasicPipeToDetailerPipeSDXL, + "DetailerPipeToBasicPipe": DetailerPipeToBasicPipe, + "EditBasicPipe": EditBasicPipe, + "EditDetailerPipe": EditDetailerPipe, + "EditDetailerPipeSDXL": EditDetailerPipeSDXL, + + "LatentPixelScale": LatentPixelScale, + "PixelKSampleUpscalerProvider": PixelKSampleUpscalerProvider, + "PixelKSampleUpscalerProviderPipe": PixelKSampleUpscalerProviderPipe, + "IterativeLatentUpscale": IterativeLatentUpscale, + "IterativeImageUpscale": IterativeImageUpscale, + "PixelTiledKSampleUpscalerProvider": PixelTiledKSampleUpscalerProvider, + "PixelTiledKSampleUpscalerProviderPipe": PixelTiledKSampleUpscalerProviderPipe, + "TwoSamplersForMaskUpscalerProvider": TwoSamplersForMaskUpscalerProvider, + "TwoSamplersForMaskUpscalerProviderPipe": TwoSamplersForMaskUpscalerProviderPipe, + + "PixelKSampleHookCombine": PixelKSampleHookCombine, + "DenoiseScheduleHookProvider": DenoiseScheduleHookProvider, + "StepsScheduleHookProvider": StepsScheduleHookProvider, + "CfgScheduleHookProvider": CfgScheduleHookProvider, + "NoiseInjectionHookProvider": 
NoiseInjectionHookProvider, + "UnsamplerHookProvider": UnsamplerHookProvider, + "CoreMLDetailerHookProvider": CoreMLDetailerHookProvider, + "PreviewDetailerHookProvider": PreviewDetailerHookProvider, + + "DetailerHookCombine": DetailerHookCombine, + "NoiseInjectionDetailerHookProvider": NoiseInjectionDetailerHookProvider, + "UnsamplerDetailerHookProvider": UnsamplerDetailerHookProvider, + "DenoiseSchedulerDetailerHookProvider": DenoiseSchedulerDetailerHookProvider, + "SEGSOrderedFilterDetailerHookProvider": SEGSOrderedFilterDetailerHookProvider, + "SEGSRangeFilterDetailerHookProvider": SEGSRangeFilterDetailerHookProvider, + "SEGSLabelFilterDetailerHookProvider": SEGSLabelFilterDetailerHookProvider, + + "BitwiseAndMask": BitwiseAndMask, + "SubtractMask": SubtractMask, + "AddMask": AddMask, + "ImpactSegsAndMask": SegsBitwiseAndMask, + "ImpactSegsAndMaskForEach": SegsBitwiseAndMaskForEach, + "EmptySegs": EmptySEGS, + + "MediaPipeFaceMeshToSEGS": MediaPipeFaceMeshToSEGS, + "MaskToSEGS": MaskToSEGS, + "MaskToSEGS_for_AnimateDiff": MaskToSEGS_for_AnimateDiff, + "ToBinaryMask": ToBinaryMask, + "MasksToMaskList": MasksToMaskList, + "MaskListToMaskBatch": MaskListToMaskBatch, + "ImageListToImageBatch": ImageListToImageBatch, + "SetDefaultImageForSEGS": DefaultImageForSEGS, + "RemoveImageFromSEGS": RemoveImageFromSEGS, + + "BboxDetectorSEGS": BboxDetectorForEach, + "SegmDetectorSEGS": SegmDetectorForEach, + "ONNXDetectorSEGS": BboxDetectorForEach, + "ImpactSimpleDetectorSEGS_for_AD": SimpleDetectorForAnimateDiff, + "ImpactSimpleDetectorSEGS": SimpleDetectorForEach, + "ImpactSimpleDetectorSEGSPipe": SimpleDetectorForEachPipe, + "ImpactControlNetApplySEGS": ControlNetApplySEGS, + "ImpactControlNetApplyAdvancedSEGS": ControlNetApplyAdvancedSEGS, + "ImpactControlNetClearSEGS": ControlNetClearSEGS, + + "ImpactDecomposeSEGS": DecomposeSEGS, + "ImpactAssembleSEGS": AssembleSEGS, + "ImpactFrom_SEG_ELT": From_SEG_ELT, + "ImpactEdit_SEG_ELT": Edit_SEG_ELT, + "ImpactDilate_Mask_SEG_ELT": Dilate_SEG_ELT, + "ImpactDilateMask": DilateMask, + "ImpactGaussianBlurMask": GaussianBlurMask, + "ImpactDilateMaskInSEGS": DilateMaskInSEGS, + "ImpactGaussianBlurMaskInSEGS": GaussianBlurMaskInSEGS, + "ImpactScaleBy_BBOX_SEG_ELT": SEG_ELT_BBOX_ScaleBy, + + "BboxDetectorCombined_v2": BboxDetectorCombined, + "SegmDetectorCombined_v2": SegmDetectorCombined, + "SegsToCombinedMask": SegsToCombinedMask, + + "KSamplerProvider": KSamplerProvider, + "TwoSamplersForMask": TwoSamplersForMask, + "TiledKSamplerProvider": TiledKSamplerProvider, + + "KSamplerAdvancedProvider": KSamplerAdvancedProvider, + "TwoAdvancedSamplersForMask": TwoAdvancedSamplersForMask, + + "PreviewBridge": PreviewBridge, + "PreviewBridgeLatent": PreviewBridgeLatent, + "ImageSender": ImageSender, + "ImageReceiver": ImageReceiver, + "LatentSender": LatentSender, + "LatentReceiver": LatentReceiver, + "ImageMaskSwitch": ImageMaskSwitch, + "LatentSwitch": GeneralSwitch, + "SEGSSwitch": GeneralSwitch, + "ImpactSwitch": GeneralSwitch, + "ImpactInversedSwitch": GeneralInversedSwitch, + + "ImpactWildcardProcessor": ImpactWildcardProcessor, + "ImpactWildcardEncode": ImpactWildcardEncode, + + "SEGSDetailer": SEGSDetailer, + "SEGSPaste": SEGSPaste, + "SEGSPreview": SEGSPreview, + "SEGSPreviewCNet": SEGSPreviewCNet, + "SEGSToImageList": SEGSToImageList, + "ImpactSEGSToMaskList": SEGSToMaskList, + "ImpactSEGSToMaskBatch": SEGSToMaskBatch, + "ImpactSEGSConcat": SEGSConcat, + "ImpactSEGSPicker": SEGSPicker, + "ImpactMakeTileSEGS": MakeTileSEGS, + + "SEGSDetailerForAnimateDiff": 
SEGSDetailerForAnimateDiff, + + "ImpactKSamplerBasicPipe": KSamplerBasicPipe, + "ImpactKSamplerAdvancedBasicPipe": KSamplerAdvancedBasicPipe, + + "ReencodeLatent": ReencodeLatent, + "ReencodeLatentPipe": ReencodeLatentPipe, + + "ImpactImageBatchToImageList": ImageBatchToImageList, + "ImpactMakeImageList": MakeImageList, + "ImpactMakeImageBatch": MakeImageBatch, + + "RegionalSampler": RegionalSampler, + "RegionalSamplerAdvanced": RegionalSamplerAdvanced, + "CombineRegionalPrompts": CombineRegionalPrompts, + "RegionalPrompt": RegionalPrompt, + + "ImpactCombineConditionings": CombineConditionings, + "ImpactConcatConditionings": ConcatConditionings, + + "ImpactSEGSLabelAssign": SEGSLabelAssign, + "ImpactSEGSLabelFilter": SEGSLabelFilter, + "ImpactSEGSRangeFilter": SEGSRangeFilter, + "ImpactSEGSOrderedFilter": SEGSOrderedFilter, + + "ImpactCompare": ImpactCompare, + "ImpactConditionalBranch": ImpactConditionalBranch, + "ImpactConditionalBranchSelMode": ImpactConditionalBranchSelMode, + "ImpactIfNone": ImpactIfNone, + "ImpactConvertDataType": ImpactConvertDataType, + "ImpactLogicalOperators": ImpactLogicalOperators, + "ImpactInt": ImpactInt, + "ImpactFloat": ImpactFloat, + "ImpactValueSender": ImpactValueSender, + "ImpactValueReceiver": ImpactValueReceiver, + "ImpactImageInfo": ImpactImageInfo, + "ImpactLatentInfo": ImpactLatentInfo, + "ImpactMinMax": ImpactMinMax, + "ImpactNeg": ImpactNeg, + "ImpactConditionalStopIteration": ImpactConditionalStopIteration, + "ImpactStringSelector": StringSelector, + + "RemoveNoiseMask": RemoveNoiseMask, + + "ImpactLogger": ImpactLogger, + "ImpactDummyInput": ImpactDummyInput, + + "ImpactQueueTrigger": ImpactQueueTrigger, + "ImpactQueueTriggerCountdown": ImpactQueueTriggerCountdown, + "ImpactSetWidgetValue": ImpactSetWidgetValue, + "ImpactNodeSetMuteState": ImpactNodeSetMuteState, + "ImpactControlBridge": ImpactControlBridge, + "ImpactIsNotEmptySEGS": ImpactNotEmptySEGS, + "ImpactSleep": ImpactSleep, + "ImpactRemoteBoolean": ImpactRemoteBoolean, + "ImpactRemoteInt": ImpactRemoteInt, + + "ImpactHFTransformersClassifierProvider": HF_TransformersClassifierProvider, + "ImpactSEGSClassify": SEGS_Classify +} + + +NODE_DISPLAY_NAME_MAPPINGS = { + "SAMLoader": "SAMLoader (Impact)", + + "BboxDetectorSEGS": "BBOX Detector (SEGS)", + "SegmDetectorSEGS": "SEGM Detector (SEGS)", + "ONNXDetectorSEGS": "ONNX Detector (SEGS/legacy) - use BBOXDetector", + "ImpactSimpleDetectorSEGS_for_AD": "Simple Detector for AnimateDiff (SEGS)", + "ImpactSimpleDetectorSEGS": "Simple Detector (SEGS)", + "ImpactSimpleDetectorSEGSPipe": "Simple Detector (SEGS/pipe)", + "ImpactControlNetApplySEGS": "ControlNetApply (SEGS)", + "ImpactControlNetApplyAdvancedSEGS": "ControlNetApplyAdvanced (SEGS)", + + "BboxDetectorCombined_v2": "BBOX Detector (combined)", + "SegmDetectorCombined_v2": "SEGM Detector (combined)", + "SegsToCombinedMask": "SEGS to MASK (combined)", + "MediaPipeFaceMeshToSEGS": "MediaPipe FaceMesh to SEGS", + "MaskToSEGS": "MASK to SEGS", + "MaskToSEGS_for_AnimateDiff": "MASK to SEGS for AnimateDiff", + "BitwiseAndMaskForEach": "Bitwise(SEGS & SEGS)", + "SubtractMaskForEach": "Bitwise(SEGS - SEGS)", + "ImpactSegsAndMask": "Bitwise(SEGS & MASK)", + "ImpactSegsAndMaskForEach": "Bitwise(SEGS & MASKS ForEach)", + "BitwiseAndMask": "Bitwise(MASK & MASK)", + "SubtractMask": "Bitwise(MASK - MASK)", + "AddMask": "Bitwise(MASK + MASK)", + "DetailerForEach": "Detailer (SEGS)", + "DetailerForEachPipe": "Detailer (SEGS/pipe)", + "DetailerForEachDebug": "DetailerDebug (SEGS)", + 
"DetailerForEachDebugPipe": "DetailerDebug (SEGS/pipe)", + "SEGSDetailerForAnimateDiff": "SEGSDetailer For AnimateDiff (SEGS/pipe)", + "DetailerForEachPipeForAnimateDiff": "Detailer For AnimateDiff (SEGS/pipe)", + + "SAMDetectorCombined": "SAMDetector (combined)", + "SAMDetectorSegmented": "SAMDetector (segmented)", + "FaceDetailerPipe": "FaceDetailer (pipe)", + "MaskDetailerPipe": "MaskDetailer (pipe)", + + "FromDetailerPipeSDXL": "FromDetailer (SDXL/pipe)", + "BasicPipeToDetailerPipeSDXL": "BasicPipe -> DetailerPipe (SDXL)", + "EditDetailerPipeSDXL": "Edit DetailerPipe (SDXL)", + + "BasicPipeToDetailerPipe": "BasicPipe -> DetailerPipe", + "DetailerPipeToBasicPipe": "DetailerPipe -> BasicPipe", + "EditBasicPipe": "Edit BasicPipe", + "EditDetailerPipe": "Edit DetailerPipe", + + "LatentPixelScale": "Latent Scale (on Pixel Space)", + "IterativeLatentUpscale": "Iterative Upscale (Latent/on Pixel Space)", + "IterativeImageUpscale": "Iterative Upscale (Image)", + + "TwoSamplersForMaskUpscalerProvider": "TwoSamplersForMask Upscaler Provider", + "TwoSamplersForMaskUpscalerProviderPipe": "TwoSamplersForMask Upscaler Provider (pipe)", + + "ReencodeLatent": "Reencode Latent", + "ReencodeLatentPipe": "Reencode Latent (pipe)", + + "ImpactKSamplerBasicPipe": "KSampler (pipe)", + "ImpactKSamplerAdvancedBasicPipe": "KSampler (Advanced/pipe)", + "ImpactSEGSLabelAssign": "SEGS Assign (label)", + "ImpactSEGSLabelFilter": "SEGS Filter (label)", + "ImpactSEGSRangeFilter": "SEGS Filter (range)", + "ImpactSEGSOrderedFilter": "SEGS Filter (ordered)", + "ImpactSEGSConcat": "SEGS Concat", + "ImpactSEGSToMaskList": "SEGS to Mask List", + "ImpactSEGSToMaskBatch": "SEGS to Mask Batch", + "ImpactSEGSPicker": "Picker (SEGS)", + "ImpactMakeTileSEGS": "Make Tile SEGS", + + "ImpactDecomposeSEGS": "Decompose (SEGS)", + "ImpactAssembleSEGS": "Assemble (SEGS)", + "ImpactFrom_SEG_ELT": "From SEG_ELT", + "ImpactEdit_SEG_ELT": "Edit SEG_ELT", + "ImpactDilate_Mask_SEG_ELT": "Dilate Mask (SEG_ELT)", + "ImpactScaleBy_BBOX_SEG_ELT": "ScaleBy BBOX (SEG_ELT)", + "ImpactDilateMask": "Dilate Mask", + "ImpactGaussianBlurMask": "Gaussian Blur Mask", + "ImpactDilateMaskInSEGS": "Dilate Mask (SEGS)", + "ImpactGaussianBlurMaskInSEGS": "Gaussian Blur Mask (SEGS)", + + "PreviewBridge": "Preview Bridge (Image)", + "PreviewBridgeLatent": "Preview Bridge (Latent)", + "ImageSender": "Image Sender", + "ImageReceiver": "Image Receiver", + "ImageMaskSwitch": "Switch (images, mask)", + "ImpactSwitch": "Switch (Any)", + "ImpactInversedSwitch": "Inversed Switch (Any)", + + "MasksToMaskList": "Masks to Mask List", + "MaskListToMaskBatch": "Mask List to Masks", + "ImpactImageBatchToImageList": "Image batch to Image List", + "ImageListToImageBatch": "Image List to Image Batch", + "ImpactMakeImageList": "Make Image List", + "ImpactMakeImageBatch": "Make Image Batch", + "ImpactStringSelector": "String Selector", + "ImpactIsNotEmptySEGS": "SEGS isn't Empty", + "SetDefaultImageForSEGS": "Set Default Image for SEGS", + "RemoveImageFromSEGS": "Remove Image from SEGS", + + "RemoveNoiseMask": "Remove Noise Mask", + + "ImpactCombineConditionings": "Combine Conditionings", + "ImpactConcatConditionings": "Concat Conditionings", + + "ImpactQueueTrigger": "Queue Trigger", + "ImpactQueueTriggerCountdown": "Queue Trigger (Countdown)", + "ImpactSetWidgetValue": "Set Widget Value", + "ImpactNodeSetMuteState": "Set Mute State", + "ImpactControlBridge": "Control Bridge", + "ImpactSleep": "Sleep", + "ImpactRemoteBoolean": "Remote Boolean (on prompt)", + "ImpactRemoteInt": 
"Remote Int (on prompt)", + + "ImpactHFTransformersClassifierProvider": "HF Transformers Classifier Provider", + "ImpactSEGSClassify": "SEGS Classify", + + "LatentSwitch": "Switch (latent/legacy)", + "SEGSSwitch": "Switch (SEGS/legacy)", + + "SEGSPreviewCNet": "SEGSPreview (CNET Image)" +} + +if not impact.config.get_config()['mmdet_skip']: + from impact.mmdet_nodes import * + import impact.legacy_nodes + NODE_CLASS_MAPPINGS.update({ + "MMDetDetectorProvider": MMDetDetectorProvider, + "MMDetLoader": impact.legacy_nodes.MMDetLoader, + "MaskPainter": impact.legacy_nodes.MaskPainter, + "SegsMaskCombine": impact.legacy_nodes.SegsMaskCombine, + "BboxDetectorForEach": impact.legacy_nodes.BboxDetectorForEach, + "SegmDetectorForEach": impact.legacy_nodes.SegmDetectorForEach, + "BboxDetectorCombined": impact.legacy_nodes.BboxDetectorCombined, + "SegmDetectorCombined": impact.legacy_nodes.SegmDetectorCombined, + }) + + NODE_DISPLAY_NAME_MAPPINGS.update({ + "MaskPainter": "MaskPainter (Deprecated)", + "MMDetLoader": "MMDetLoader (Legacy)", + "SegsMaskCombine": "SegsMaskCombine (Legacy)", + "BboxDetectorForEach": "BboxDetectorForEach (Legacy)", + "SegmDetectorForEach": "SegmDetectorForEach (Legacy)", + "BboxDetectorCombined": "BboxDetectorCombined (Legacy)", + "SegmDetectorCombined": "SegmDetectorCombined (Legacy)", + }) + +try: + import impact.subpack_nodes + + NODE_CLASS_MAPPINGS.update(impact.subpack_nodes.NODE_CLASS_MAPPINGS) + NODE_DISPLAY_NAME_MAPPINGS.update(impact.subpack_nodes.NODE_DISPLAY_NAME_MAPPINGS) +except Exception as e: + print("### ComfyUI-Impact-Pack: (IMPORT FAILED) Subpack\n") + print(" The module at the `custom_nodes/ComfyUI-Impact-Pack/impact_subpack` path appears to be incomplete.") + print(" Recommended to delete the path and restart ComfyUI.") + print(" If the issue persists, please report it to https://github.com/ltdrdata/ComfyUI-Impact-Pack/issues.") + print("\n---------------------------------") + traceback.print_exc() + print("---------------------------------\n") + +WEB_DIRECTORY = "js" +__all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS'] + + +try: + import cm_global + cm_global.register_extension('ComfyUI-Impact-Pack', + {'version': config.version_code, + 'name': 'Impact Pack', + 'nodes': set(NODE_CLASS_MAPPINGS.keys()), + 'description': 'This extension provides inpainting functionality based on the detector and detailer, along with convenient workflow features like wildcards and logics.', }) +except: + pass diff --git a/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards/put_wildcards_here b/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards/put_wildcards_here new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/custom_nodes/ComfyUI-Impact-Pack/disable.py b/custom_nodes/ComfyUI-Impact-Pack/disable.py new file mode 100644 index 0000000000000000000000000000000000000000..2d62417c14128faca59ced13bbd83d5cd8708da3 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/disable.py @@ -0,0 +1,38 @@ +import os +import sys +import time +import platform +import shutil +import subprocess + +comfy_path = '../..' 
+ +def rmtree(path): + retry_count = 3 + + while True: + try: + retry_count -= 1 + + if platform.system() == "Windows": + subprocess.check_call(['attrib', '-R', path + '\\*', '/S']) + + shutil.rmtree(path) + + return True + + except Exception as ex: + print(f"ex: {ex}") + time.sleep(3) + + if retry_count < 0: + raise ex + + print(f"Uninstall retry({retry_count})") + +js_dest_path = os.path.join(comfy_path, "web", "extensions", "impact-pack") + +if os.path.exists(js_dest_path): + rmtree(js_dest_path) + + diff --git a/custom_nodes/ComfyUI-Impact-Pack/impact-pack.ini b/custom_nodes/ComfyUI-Impact-Pack/impact-pack.ini new file mode 100644 index 0000000000000000000000000000000000000000..8c9d2997d8294c336ec0bf37020731324940f75b --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/impact-pack.ini @@ -0,0 +1,8 @@ +[default] +dependency_version = 20 +mmdet_skip = True +sam_editor_cpu = False +sam_editor_model = sam_vit_b_01ec64.pth +custom_wildcards = /home/tiger/Magic-ComfyUI/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards +disable_gpu_opencv = True + diff --git a/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/LICENSE b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..0ad25db4bd1d86c452db3f9602ccdbe172438f52 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/LICENSE @@ -0,0 +1,661 @@ + GNU AFFERO GENERAL PUBLIC LICENSE + Version 3, 19 November 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU Affero General Public License is a free, copyleft license for +software and other kinds of works, specifically designed to ensure +cooperation with the community in the case of network server software. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +our General Public Licenses are intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + Developers that use our General Public Licenses protect your rights +with two steps: (1) assert copyright on the software, and (2) offer +you this License which gives you legal permission to copy, distribute +and/or modify the software. + + A secondary benefit of defending all users' freedom is that +improvements made in alternate versions of the program, if they +receive widespread use, become available for other developers to +incorporate. Many developers of free software are heartened and +encouraged by the resulting cooperation. However, in the case of +software used on network servers, this result may fail to come about. +The GNU General Public License permits making a modified version and +letting the public access it on a server without ever releasing its +source code to the public. 
+ + The GNU Affero General Public License is designed specifically to +ensure that, in such cases, the modified source code becomes available +to the community. It requires the operator of a network server to +provide the source code of the modified version running there to the +users of that server. Therefore, public use of a modified version, on +a publicly accessible server, gives the public access to the source +code of the modified version. + + An older license, called the Affero General Public License and +published by Affero, was designed to accomplish similar goals. This is +a different license, not a version of the Affero GPL, but Affero has +released a new version of the Affero GPL which permits relicensing under +this license. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU Affero General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. 
+ + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. 
+ + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. 
+ + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. 
+ + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. 
+ + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. 
If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). 
To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Remote Network Interaction; Use with the GNU General Public License. 
+ + Notwithstanding any other provision of this License, if you modify the +Program, your modified version must prominently offer all users +interacting with it remotely through a computer network (if your version +supports such interaction) an opportunity to receive the Corresponding +Source of your version by providing access to the Corresponding Source +from a network server at no charge, through some standard or customary +means of facilitating copying of software. This Corresponding Source +shall include the Corresponding Source for any work covered by version 3 +of the GNU General Public License that is incorporated pursuant to the +following paragraph. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the work with which it is combined will remain governed by version +3 of the GNU General Public License. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU Affero General Public License from time to time. Such new versions +will be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU Affero General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU Affero General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU Affero General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. 
+ + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + + Copyright (C) + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU Affero General Public License as published + by the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU Affero General Public License for more details. + + You should have received a copy of the GNU Affero General Public License + along with this program. If not, see . + +Also add information on how to contact you by electronic and paper mail. + + If your software can interact with users remotely through a computer +network, you should also make sure that it provides a way for users to +get its source. For example, if your program is a web application, its +interface could display a "Source" link that leads users to an archive +of the code. There are many ways you could offer source, and different +solutions will be better for different programs; see section 13 for the +specific requirements. + + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU AGPL, see +. 
diff --git a/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/README.md b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..f3700a13fb9123c50280b8c30c949eabda29b01a
--- /dev/null
+++ b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/README.md
@@ -0,0 +1,18 @@
+# ComfyUI-Impact-Subpack
+This extension serves as a complement to the Impact Pack, offering features that are not deemed suitable for inclusion by default in the ComfyUI Impact Pack.
+
+The nodes in this repository cannot be used standalone and depend on [ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack).
+
+## Nodes
+* UltralyticsDetectorProvider - This node provides an object detector based on Ultralytics.
+  * By using this Detector Provider, you can replace the existing mmdet-based detector.
+
+
+## Credits
+
+ComfyUI/[ComfyUI](https://github.com/comfyanonymous/ComfyUI) - A powerful and modular stable diffusion GUI.
+
+Bing-su/[adetailer](https://github.com/Bing-su/adetailer/) - This repository provides object detection models and features based on Ultralytics.
+
+huggingface/Bingsu/[adetailer](https://huggingface.co/Bingsu/adetailer/tree/main) - This repository offers various models based on Ultralytics.
+* You can download other models supported by the UltralyticsDetectorProvider from here.
\ No newline at end of file
diff --git a/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/impact/subcore.py b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/impact/subcore.py
new file mode 100644
index 0000000000000000000000000000000000000000..ce5400e87e778f107b4273a3c7fb749b6686a09f
--- /dev/null
+++ b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/impact/subcore.py
@@ -0,0 +1,213 @@
+from pathlib import Path
+from PIL import Image
+
+import impact.core as core
+import cv2
+import numpy as np
+from torchvision.transforms.functional import to_pil_image
+import torch
+
+try:
+    from ultralytics import YOLO
+except Exception as e:
+    print(e)
+    print(f"\n!!!!!\n\n[ComfyUI-Impact-Subpack] If this error occurs, please check the following link:\n\thttps://github.com/ltdrdata/ComfyUI-Impact-Pack/blob/Main/troubleshooting/TROUBLESHOOTING.md\n\n!!!!!\n")
+    raise e
+
+
+def load_yolo(model_path: str):
+    try:
+        return YOLO(model_path)
+    except ModuleNotFoundError:
+        # https://github.com/ultralytics/ultralytics/issues/3856
+        YOLO("yolov8n.pt")
+        return YOLO(model_path)
+
+
+def inference_bbox(
+    model,
+    image: Image.Image,
+    confidence: float = 0.3,
+    device: str = "",
+):
+    pred = model(image, conf=confidence, device=device)
+
+    bboxes = pred[0].boxes.xyxy.cpu().numpy()
+    cv2_image = np.array(image)
+    if len(cv2_image.shape) == 3:
+        cv2_image = cv2_image[:, :, ::-1].copy()  # Convert RGB to BGR for cv2 processing
+    else:
+        # Grayscale input: convert to a 3-channel BGR image so the cv2 calls below behave consistently
+        cv2_image = cv2.cvtColor(cv2_image, cv2.COLOR_GRAY2BGR)
+    cv2_gray = cv2.cvtColor(cv2_image, cv2.COLOR_BGR2GRAY)
+
+    segms = []
+    for x0, y0, x1, y1 in bboxes:
+        cv2_mask = np.zeros(cv2_gray.shape, np.uint8)
+        cv2.rectangle(cv2_mask, (int(x0), int(y0)), (int(x1), int(y1)), 255, -1)
+        cv2_mask_bool = cv2_mask.astype(bool)
+        segms.append(cv2_mask_bool)
+
+    n, m = bboxes.shape
+    if n == 0:
+        return [[], [], [], []]
+
+    results = [[], [], [], []]
+    for i in range(len(bboxes)):
+        results[0].append(pred[0].names[int(pred[0].boxes[i].cls.item())])
+        results[1].append(bboxes[i])
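+        # results: [labels, xyxy bboxes, boolean masks, confidences], one entry per detection
+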
results[2].append(segms[i]) + results[3].append(pred[0].boxes[i].conf.cpu().numpy()) + + return results + + +def inference_segm( + model, + image: Image.Image, + confidence: float = 0.3, + device: str = "", +): + pred = model(image, conf=confidence, device=device) + + bboxes = pred[0].boxes.xyxy.cpu().numpy() + n, m = bboxes.shape + if n == 0: + return [[], [], [], []] + + # NOTE: masks.data will be None when n == 0 + segms = pred[0].masks.data.cpu().numpy() + + results = [[], [], [], []] + for i in range(len(bboxes)): + results[0].append(pred[0].names[int(pred[0].boxes[i].cls.item())]) + results[1].append(bboxes[i]) + + mask = torch.from_numpy(segms[i]) + scaled_mask = torch.nn.functional.interpolate(mask.unsqueeze(0).unsqueeze(0), size=(image.size[1], image.size[0]), + mode='bilinear', align_corners=False) + scaled_mask = scaled_mask.squeeze().squeeze() + + results[2].append(scaled_mask.numpy()) + results[3].append(pred[0].boxes[i].conf.cpu().numpy()) + + return results + + +class UltraBBoxDetector: + bbox_model = None + + def __init__(self, bbox_model): + self.bbox_model = bbox_model + + def detect(self, image, threshold, dilation, crop_factor, drop_size=1, detailer_hook=None): + drop_size = max(drop_size, 1) + detected_results = inference_bbox(self.bbox_model, core.tensor2pil(image), threshold) + segmasks = core.create_segmasks(detected_results) + + if dilation > 0: + segmasks = core.dilate_masks(segmasks, dilation) + + items = [] + h = image.shape[1] + w = image.shape[2] + + for x, label in zip(segmasks, detected_results[0]): + item_bbox = x[0] + item_mask = x[1] + + y1, x1, y2, x2 = item_bbox + + if x2 - x1 > drop_size and y2 - y1 > drop_size: # minimum dimension must be (2,2) to avoid squeeze issue + crop_region = core.make_crop_region(w, h, item_bbox, crop_factor) + + if detailer_hook is not None: + crop_region = detailer_hook.post_crop_region(w, h, item_bbox, crop_region) + + cropped_image = core.crop_image(image, crop_region) + cropped_mask = core.crop_ndarray2(item_mask, crop_region) + confidence = x[2] + # bbox_size = (item_bbox[2]-item_bbox[0],item_bbox[3]-item_bbox[1]) # (w,h) + + item = core.SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox, label, None) + + items.append(item) + + shape = image.shape[1], image.shape[2] + segs = shape, items + + if detailer_hook is not None and hasattr(detailer_hook, "post_detection"): + segs = detailer_hook.post_detection(segs) + + return segs + + def detect_combined(self, image, threshold, dilation): + detected_results = inference_bbox(self.bbox_model, core.tensor2pil(image), threshold) + segmasks = core.create_segmasks(detected_results) + if dilation > 0: + segmasks = core.dilate_masks(segmasks, dilation) + + return core.combine_masks(segmasks) + + def setAux(self, x): + pass + + +class UltraSegmDetector: + bbox_model = None + + def __init__(self, bbox_model): + self.bbox_model = bbox_model + + def detect(self, image, threshold, dilation, crop_factor, drop_size=1, detailer_hook=None): + drop_size = max(drop_size, 1) + detected_results = inference_segm(self.bbox_model, core.tensor2pil(image), threshold) + segmasks = core.create_segmasks(detected_results) + + if dilation > 0: + segmasks = core.dilate_masks(segmasks, dilation) + + items = [] + h = image.shape[1] + w = image.shape[2] + + for x, label in zip(segmasks, detected_results[0]): + item_bbox = x[0] + item_mask = x[1] + + y1, x1, y2, x2 = item_bbox + + if x2 - x1 > drop_size and y2 - y1 > drop_size: # minimum dimension must be (2,2) to avoid squeeze issue + 
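                # grow the detected bbox by crop_factor so the crop keeps surrounding context
+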
crop_region = core.make_crop_region(w, h, item_bbox, crop_factor) + + if detailer_hook is not None: + crop_region = detailer_hook.post_crop_region(w, h, item_bbox, crop_region) + + cropped_image = core.crop_image(image, crop_region) + cropped_mask = core.crop_ndarray2(item_mask, crop_region) + confidence = x[2] + # bbox_size = (item_bbox[2]-item_bbox[0],item_bbox[3]-item_bbox[1]) # (w,h) + + item = core.SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox, label, None) + + items.append(item) + + shape = image.shape[1], image.shape[2] + segs = shape, items + + if detailer_hook is not None and hasattr(detailer_hook, "post_detection"): + segs = detailer_hook.post_detection(segs) + + return segs + + def detect_combined(self, image, threshold, dilation): + detected_results = inference_segm(self.bbox_model, core.tensor2pil(image), threshold) + segmasks = core.create_segmasks(detected_results) + if dilation > 0: + segmasks = core.dilate_masks(segmasks, dilation) + + return core.combine_masks(segmasks) + + def setAux(self, x): + pass \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/impact/subpack_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/impact/subpack_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..72d7109c548a584a380752b912f4e959b267ec66 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/impact/subpack_nodes.py @@ -0,0 +1,45 @@ +import os +import folder_paths +import impact.core as core +import impact.subcore as subcore +from impact.utils import add_folder_path_and_extensions + +version_code = 20 + +print(f"### Loading: ComfyUI-Impact-Pack (Subpack: V0.4)") + +model_path = folder_paths.models_dir +add_folder_path_and_extensions("ultralytics_bbox", [os.path.join(model_path, "ultralytics", "bbox")], folder_paths.supported_pt_extensions) +add_folder_path_and_extensions("ultralytics_segm", [os.path.join(model_path, "ultralytics", "segm")], folder_paths.supported_pt_extensions) +add_folder_path_and_extensions("ultralytics", [os.path.join(model_path, "ultralytics")], folder_paths.supported_pt_extensions) + + +class UltralyticsDetectorProvider: + @classmethod + def INPUT_TYPES(s): + bboxs = ["bbox/"+x for x in folder_paths.get_filename_list("ultralytics_bbox")] + segms = ["segm/"+x for x in folder_paths.get_filename_list("ultralytics_segm")] + return {"required": {"model_name": (bboxs + segms, )}} + RETURN_TYPES = ("BBOX_DETECTOR", "SEGM_DETECTOR") + FUNCTION = "doit" + + CATEGORY = "ImpactPack" + + def doit(self, model_name): + model_path = folder_paths.get_full_path("ultralytics", model_name) + model = subcore.load_yolo(model_path) + + if model_name.startswith("bbox"): + return subcore.UltraBBoxDetector(model), core.NO_SEGM_DETECTOR() + else: + return subcore.UltraBBoxDetector(model), subcore.UltraSegmDetector(model) + + +NODE_CLASS_MAPPINGS = { + "UltralyticsDetectorProvider": UltralyticsDetectorProvider +} + + +NODE_DISPLAY_NAME_MAPPINGS = { + +} diff --git a/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/install.py b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/install.py new file mode 100644 index 0000000000000000000000000000000000000000..9145fbe0f1d52192d507389f8158b64ca1b9fc64 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/install.py @@ -0,0 +1,32 @@ +import os +import sys +from torchvision.datasets.utils import download_url + +subpack_path = os.path.join(os.path.dirname(__file__)) +comfy_path = os.path.join(subpack_path, '..', '..', '..') + 
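+# Put the ComfyUI root on sys.path so that `folder_paths` can be imported below.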
+sys.path.append(comfy_path) + +import folder_paths +model_path = folder_paths.models_dir +ultralytics_bbox_path = os.path.join(model_path, "ultralytics", "bbox") +ultralytics_segm_path = os.path.join(model_path, "ultralytics", "segm") + +if not os.path.exists(os.path.join(subpack_path, '..', '..', 'skip_download_model')): + if not os.path.exists(ultralytics_bbox_path): + os.makedirs(ultralytics_bbox_path) + + if not os.path.exists(ultralytics_segm_path): + os.makedirs(ultralytics_segm_path) + + if not os.path.exists(os.path.join(ultralytics_bbox_path, "face_yolov8m.pt")): + download_url("https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov8m.pt", + ultralytics_bbox_path) + + if not os.path.exists(os.path.join(ultralytics_bbox_path, "hand_yolov8s.pt")): + download_url("https://huggingface.co/Bingsu/adetailer/resolve/main/hand_yolov8s.pt", + ultralytics_bbox_path) + + if not os.path.exists(os.path.join(ultralytics_segm_path, "person_yolov8m-seg.pt")): + download_url("https://huggingface.co/Bingsu/adetailer/resolve/main/person_yolov8m-seg.pt", + ultralytics_segm_path) diff --git a/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/requirements.txt b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..8d0a784681f77b24bf3c98efc34c9e5091862aad --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/impact_subpack/requirements.txt @@ -0,0 +1 @@ +ultralytics!=8.0.177 \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/install.py b/custom_nodes/ComfyUI-Impact-Pack/install.py new file mode 100644 index 0000000000000000000000000000000000000000..3cae095b8c6d9f21cca354c3b579b08e4f8ca937 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/install.py @@ -0,0 +1,285 @@ +import os +import shutil +import sys +import subprocess +import threading +import locale +import traceback +import re + + +if sys.argv[0] == 'install.py': + sys.path.append('.') # for portable version + + +impact_path = os.path.join(os.path.dirname(__file__), "modules") +old_subpack_path = os.path.join(os.path.dirname(__file__), "subpack") +subpack_path = os.path.join(os.path.dirname(__file__), "impact_subpack") +subpack_repo = "https://github.com/ltdrdata/ComfyUI-Impact-Subpack" +comfy_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')) + + +sys.path.append(impact_path) +sys.path.append(comfy_path) + + +# --- +def handle_stream(stream, is_stdout): + stream.reconfigure(encoding=locale.getpreferredencoding(), errors='replace') + + for msg in stream: + if is_stdout: + print(msg, end="", file=sys.stdout) + else: + print(msg, end="", file=sys.stderr) + + +def process_wrap(cmd_str, cwd=None, handler=None): + print(f"[Impact Pack] EXECUTE: {cmd_str} in '{cwd}'") + process = subprocess.Popen(cmd_str, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1) + + if handler is None: + handler = handle_stream + + stdout_thread = threading.Thread(target=handler, args=(process.stdout, True)) + stderr_thread = threading.Thread(target=handler, args=(process.stderr, False)) + + stdout_thread.start() + stderr_thread.start() + + stdout_thread.join() + stderr_thread.join() + + return process.wait() +# --- + + +pip_list = None + + +def get_installed_packages(): + global pip_list + + if pip_list is None: + try: + result = subprocess.check_output([sys.executable, '-m', 'pip', 'list'], universal_newlines=True) + pip_list = set([line.split()[0].lower() for line in result.split('\n') if line.strip()]) + except 
subprocess.CalledProcessError as e:
+            print("[ComfyUI-Impact-Pack] Failed to retrieve the list of installed pip packages.")
+            return set()
+
+    return pip_list
+
+
+def is_installed(name):
+    name = name.strip()
+    pattern = r'([^<>!=]+)([<>!=]=?)'
+    match = re.search(pattern, name)
+
+    if match:
+        name = match.group(1)
+
+    result = name.lower() in get_installed_packages()
+    return result
+
+
+def is_requirements_installed(file_path):
+    print(f"req_path: {file_path}")
+    if os.path.exists(file_path):
+        with open(file_path, 'r') as file:
+            lines = file.readlines()
+            for line in lines:
+                if not is_installed(line):
+                    return False
+
+    return True
+
+try:
+    import platform
+    import folder_paths
+    from torchvision.datasets.utils import download_url
+    import impact.config
+
+
+    print("### ComfyUI-Impact-Pack: Check dependencies")
+
+    if "python_embeded" in sys.executable or "python_embedded" in sys.executable:
+        pip_install = [sys.executable, '-s', '-m', 'pip', 'install']
+        mim_install = [sys.executable, '-s', '-m', 'mim', 'install']
+    else:
+        pip_install = [sys.executable, '-m', 'pip', 'install']
+        mim_install = [sys.executable, '-m', 'mim', 'install']
+
+
+    def ensure_subpack():
+        import git
+        if os.path.exists(subpack_path):
+            try:
+                repo = git.Repo(subpack_path)
+                repo.remotes.origin.pull()
+            except:
+                traceback.print_exc()
+                if platform.system() == 'Windows':
+                    print(f"[ComfyUI-Impact-Pack] Please turn off ComfyUI and remove '{subpack_path}' and restart ComfyUI.")
+                else:
+                    shutil.rmtree(subpack_path)
+                    git.Repo.clone_from(subpack_repo, subpack_path)
+        else:
+            git.Repo.clone_from(subpack_repo, subpack_path)
+
+        if os.path.exists(old_subpack_path):
+            shutil.rmtree(old_subpack_path)
+
+
+    def remove_olds():
+        global comfy_path
+
+        comfy_path = os.path.dirname(folder_paths.__file__)
+        custom_nodes_path = os.path.join(comfy_path, "custom_nodes")
+        old_ini_path = os.path.join(custom_nodes_path, "impact-pack.ini")
+        old_py_path = os.path.join(custom_nodes_path, "comfyui-impact-pack.py")
+
+        if os.path.exists(impact.config.old_config_path):
+            impact.config.get_config()['mmdet_skip'] = False
+            os.remove(impact.config.old_config_path)
+
+        if os.path.exists(old_ini_path):
+            print(f"Delete legacy file: {old_ini_path}")
+            os.remove(old_ini_path)
+
+        if os.path.exists(old_py_path):
+            print(f"Delete legacy file: {old_py_path}")
+            os.remove(old_py_path)
+
+
+    def ensure_pip_packages_first():
+        subpack_req = os.path.join(subpack_path, "requirements.txt")
+        if os.path.exists(subpack_req) and not is_requirements_installed(subpack_req):
+            process_wrap(pip_install + ['-r', 'requirements.txt'], cwd=subpack_path)
+
+        if not impact.config.get_config()['mmdet_skip']:
+            process_wrap(pip_install + ['openmim'])
+
+        try:
+            import pycocotools
+        except Exception:
+            if platform.system() not in ["Windows"] or platform.machine() not in ["AMD64", "x86_64"]:
+                print(f"Your system is {platform.system()}. !! You need to install 'libpython3-dev' for this step. !!")
!!") + + process_wrap(pip_install + ['pycocotools']) + else: + pycocotools = { + (3, 8): "https://github.com/Bing-su/dddetailer/releases/download/pycocotools/pycocotools-2.0.6-cp38-cp38-win_amd64.whl", + (3, 9): "https://github.com/Bing-su/dddetailer/releases/download/pycocotools/pycocotools-2.0.6-cp39-cp39-win_amd64.whl", + (3, 10): "https://github.com/Bing-su/dddetailer/releases/download/pycocotools/pycocotools-2.0.6-cp310-cp310-win_amd64.whl", + (3, 11): "https://github.com/Bing-su/dddetailer/releases/download/pycocotools/pycocotools-2.0.6-cp311-cp311-win_amd64.whl", + } + + version = sys.version_info[:2] + url = pycocotools[version] + process_wrap(pip_install + [url]) + + + def ensure_pip_packages_last(): + my_path = os.path.dirname(__file__) + requirements_path = os.path.join(my_path, "requirements.txt") + + if not is_requirements_installed(requirements_path): + process_wrap(pip_install + ['-r', requirements_path]) + + # fallback + try: + import segment_anything + from skimage.measure import label, regionprops + import piexif + except Exception: + process_wrap(pip_install + ['-r', requirements_path]) + + # !! cv2 importing test must be very last !! + try: + import cv2 + except Exception: + try: + if not is_installed('opencv-python'): + process_wrap(pip_install + ['opencv-python']) + if not is_installed('opencv-python-headless'): + process_wrap(pip_install + ['opencv-python-headless']) + except: + print(f"[ERROR] ComfyUI-Impact-Pack: failed to install 'opencv-python'. Please, install manually.") + + def ensure_mmdet_package(): + try: + import mmcv + import mmdet + from mmdet.evaluation import get_classes + except Exception: + process_wrap(pip_install + ['opendatalab==0.0.9']) + process_wrap(pip_install + ['-U', 'openmim']) + process_wrap(mim_install + ['mmcv>=2.0.0rc4, <2.1.0']) + process_wrap(mim_install + ['mmdet==3.0.0']) + process_wrap(mim_install + ['mmengine==0.7.4']) + + + def install(): + remove_olds() + + subpack_install_script = os.path.join(subpack_path, "install.py") + + print(f"### ComfyUI-Impact-Pack: Updating subpack") + try: + import git + except Exception: + if not is_installed('GitPython'): + process_wrap(pip_install + ['GitPython']) + + ensure_subpack() # The installation of the subpack must take place before ensure_pip. cv2 triggers a permission error. 
+ + if os.path.exists(subpack_install_script): + process_wrap([sys.executable, 'install.py'], cwd=subpack_path) + if not is_requirements_installed(os.path.join(subpack_path, 'requirements.txt')): + process_wrap(pip_install + ['-r', 'requirements.txt'], cwd=subpack_path) + else: + print(f"### ComfyUI-Impact-Pack: (Install Failed) Subpack\nFile not found: `{subpack_install_script}`") + + ensure_pip_packages_first() + + if not impact.config.get_config()['mmdet_skip']: + ensure_mmdet_package() + + ensure_pip_packages_last() + + # Download model + print("### ComfyUI-Impact-Pack: Check basic models") + + model_path = folder_paths.models_dir + + bbox_path = os.path.join(model_path, "mmdets", "bbox") + sam_path = os.path.join(model_path, "sams") + onnx_path = os.path.join(model_path, "onnx") + + if not os.path.exists(os.path.join(os.path.dirname(__file__), '..', 'skip_download_model')): + if not os.path.exists(bbox_path): + os.makedirs(bbox_path) + + if not impact.config.get_config()['mmdet_skip']: + if not os.path.exists(os.path.join(bbox_path, "mmdet_anime-face_yolov3.pth")): + download_url("https://huggingface.co/dustysys/ddetailer/resolve/main/mmdet/bbox/mmdet_anime-face_yolov3.pth", bbox_path) + + if not os.path.exists(os.path.join(bbox_path, "mmdet_anime-face_yolov3.py")): + download_url("https://raw.githubusercontent.com/Bing-su/dddetailer/master/config/mmdet_anime-face_yolov3.py", bbox_path) + + if not os.path.exists(os.path.join(sam_path, "sam_vit_b_01ec64.pth")): + download_url("https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth", sam_path) + + if not os.path.exists(onnx_path): + print(f"### ComfyUI-Impact-Pack: onnx model directory created ({onnx_path})") + os.mkdir(onnx_path) + + impact.config.write_config() + + + install() + +except Exception as e: + print("[ERROR] ComfyUI-Impact-Pack: Dependency installation has failed. 
Please install manually.") + traceback.print_exc() diff --git a/custom_nodes/ComfyUI-Impact-Pack/js/comboBoolMigration.js b/custom_nodes/ComfyUI-Impact-Pack/js/comboBoolMigration.js new file mode 100644 index 0000000000000000000000000000000000000000..fa5521682b0e2454148b940ef77806c690cebc87 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/js/comboBoolMigration.js @@ -0,0 +1,35 @@ +import { ComfyApp, app } from "../../scripts/app.js"; + +let conflict_check = undefined; + +app.registerExtension({ + name: "Comfy.impact.comboBoolMigration", + + nodeCreated(node, app) { + for(let i in node.widgets) { + let widget = node.widgets[i]; + + if(conflict_check == undefined) { + conflict_check = !!app.extensions.find((ext) => ext.name === "Comfy.comboBoolMigration"); + } + + if(conflict_check) + return; + + if(widget.type == "toggle") { + let value = widget.value; + + var v = Object.getOwnPropertyDescriptor(widget, 'value'); + if(!v) { + Object.defineProperty(widget, "value", { + set: (value) => { + delete widget.value; + widget.value = value == true || value == widget.options.on; + }, + get: () => { return value; } + }); + } + } + } + } +}); diff --git a/custom_nodes/ComfyUI-Impact-Pack/js/common.js b/custom_nodes/ComfyUI-Impact-Pack/js/common.js new file mode 100644 index 0000000000000000000000000000000000000000..b60f6c3159dec577e481a2552b695cc5c2b35341 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/js/common.js @@ -0,0 +1,95 @@ +import { api } from "../../scripts/api.js"; +import { app } from "../../scripts/app.js"; + +let original_show = app.ui.dialog.show; + +function dialog_show_wrapper(html) { + if (typeof html === "string") { + if(html.includes("IMPACT-PACK-SIGNAL: STOP CONTROL BRIDGE")) { + return; + } + + this.textElement.innerHTML = html; + } else { + this.textElement.replaceChildren(html); + } + this.element.style.display = "flex"; +} + +app.ui.dialog.show = dialog_show_wrapper; + + +function nodeFeedbackHandler(event) { + let nodes = app.graph._nodes_by_id; + let node = nodes[event.detail.node_id]; + if(node) { + const w = node.widgets.find((w) => event.detail.widget_name === w.name); + if(w) { + w.value = event.detail.value; + } + } +} + +api.addEventListener("impact-node-feedback", nodeFeedbackHandler); + + +function setMuteState(event) { + let nodes = app.graph._nodes_by_id; + let node = nodes[event.detail.node_id]; + if(node) { + if(event.detail.is_active) + node.mode = 0; + else + node.mode = 2; + } +} + +api.addEventListener("impact-node-mute-state", setMuteState); + + +async function bridgeContinue(event) { + let nodes = app.graph._nodes_by_id; + let node = nodes[event.detail.node_id]; + if(node) { + const mutes = new Set(event.detail.mutes); + const actives = new Set(event.detail.actives); + const bypasses = new Set(event.detail.bypasses); + + for(let i in app.graph._nodes_by_id) { + let this_node = app.graph._nodes_by_id[i]; + if(mutes.has(i)) { + this_node.mode = 2; + } + else if(actives.has(i)) { + this_node.mode = 0; + } + else if(bypasses.has(i)) { + this_node.mode = 4; + } + } + + await app.queuePrompt(0, 1); + } +} + +api.addEventListener("impact-bridge-continue", bridgeContinue); + + +function addQueue(event) { + app.queuePrompt(0, 1); +} + +api.addEventListener("impact-add-queue", addQueue); + + +function refreshPreview(event) { + let node_id = event.detail.node_id; + let item = event.detail.item; + let img = new Image(); + img.src = `/view?filename=${item.filename}&subfolder=${item.subfolder}&type=${item.type}&no-cache=${Date.now()}`; + let node = 
app.graph._nodes_by_id[node_id]; + if(node) + node.imgs = [img]; +} + +api.addEventListener("impact-preview", refreshPreview); diff --git a/custom_nodes/ComfyUI-Impact-Pack/js/impact-image-util.js b/custom_nodes/ComfyUI-Impact-Pack/js/impact-image-util.js new file mode 100644 index 0000000000000000000000000000000000000000..216cd165f7a024587a0d9902d2b34c503d7d24a0 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/js/impact-image-util.js @@ -0,0 +1,229 @@ +import { ComfyApp, app } from "../../scripts/app.js"; +import { api } from "../../scripts/api.js"; + +function load_image(str) { + let base64String = canvas.toDataURL('image/png'); + let img = new Image(); + img.src = base64String; +} + +function getFileItem(baseType, path) { + try { + let pathType = baseType; + + if (path.endsWith("[output]")) { + pathType = "output"; + path = path.slice(0, -9); + } else if (path.endsWith("[input]")) { + pathType = "input"; + path = path.slice(0, -8); + } else if (path.endsWith("[temp]")) { + pathType = "temp"; + path = path.slice(0, -7); + } + + const subfolder = path.substring(0, path.lastIndexOf('/')); + const filename = path.substring(path.lastIndexOf('/') + 1); + + return { + filename: filename, + subfolder: subfolder, + type: pathType + }; + } + catch(exception) { + return null; + } +} + +async function loadImageFromUrl(image, node_id, v, need_to_load) { + let item = getFileItem('temp', v); + + if(item) { + let params = `?node_id=${node_id}&filename=${item.filename}&type=${item.type}&subfolder=${item.subfolder}`; + + let res = await api.fetchApi('/impact/set/pb_id_image'+params, { cache: "no-store" }); + if(res.status == 200) { + let pb_id = await res.text(); + if(need_to_load) {; + image.src = `view?filename=${item.filename}&type=${item.type}&subfolder=${item.subfolder}`; + } + return pb_id; + } + else { + return `$${node_id}-0`; + } + } + else { + return `$${node_id}-0`; + } +} + +async function loadImageFromId(image, v) { + let res = await api.fetchApi('/impact/get/pb_id_image?id='+v, { cache: "no-store" }); + if(res.status == 200) { + let item = await res.json(); + image.src = `view?filename=${item.filename}&type=${item.type}&subfolder=${item.subfolder}`; + return true; + } + + return false; +} + +app.registerExtension({ + name: "Comfy.Impact.img", + + nodeCreated(node, app) { + if(node.comfyClass == "PreviewBridge" || node.comfyClass == "PreviewBridgeLatent") { + let w = node.widgets.find(obj => obj.name === 'image'); + node._imgs = [new Image()]; + node.imageIndex = 0; + + Object.defineProperty(w, 'value', { + async set(v) { + if(w._lock) + return; + + const stackTrace = new Error().stack; + if(stackTrace.includes('presetText.js')) + return; + + var image = new Image(); + if(v && v.constructor == String && v.startsWith('$')) { + // from node feedback + let need_to_load = node._imgs[0].src == ''; + if(await loadImageFromId(image, v, need_to_load)) { + w._value = v; + if(node._imgs[0].src == '') { + node._imgs = [image]; + } + } + else { + w._value = `$${node.id}-0`; + } + } + else { + // from clipspace + w._lock = true; + w._value = await loadImageFromUrl(image, node.id, v, false); + w._lock = false; + } + }, + get() { + if(w._value == undefined) { + w._value = `$${node.id}-0`; + } + return w._value; + } + }); + + Object.defineProperty(node, 'imgs', { + set(v) { + const stackTrace = new Error().stack; + if(v && v.length == 0) + return; + else if(stackTrace.includes('pasteFromClipspace')) { + let sp = new URLSearchParams(v[0].src.split("?")[1]); + let str = ""; + if(sp.get('subfolder')) { + 
str += sp.get('subfolder') + '/'; + } + str += `${sp.get("filename")} [${sp.get("type")}]`; + + w.value = str; + } + + node._imgs = v; + }, + get() { + return node._imgs; + } + }); + } + + if(node.comfyClass == "ImageReceiver") { + let path_widget = node.widgets.find(obj => obj.name === 'image'); + let w = node.widgets.find(obj => obj.name === 'image_data'); + let stw_widget = node.widgets.find(obj => obj.name === 'save_to_workflow'); + w._value = ""; + + Object.defineProperty(w, 'value', { + set(v) { + if(v != '[IMAGE DATA]') + w._value = v; + }, + get() { + const stackTrace = new Error().stack; + if(!stackTrace.includes('draw') && !stackTrace.includes('graphToPrompt') && stackTrace.includes('app.js')) { + return "[IMAGE DATA]"; + } + else { + if(stw_widget.value) + return w._value; + else + return ""; + } + } + }); + + let set_img_act = (v) => { + node._img = v; + var canvas = document.createElement('canvas'); + canvas.width = v[0].width; + canvas.height = v[0].height; + + var context = canvas.getContext('2d'); + context.drawImage(v[0], 0, 0, v[0].width, v[0].height); + + var base64Image = canvas.toDataURL('image/png'); + w.value = base64Image; + }; + + Object.defineProperty(node, 'imgs', { + set(v) { + if (!v[0].complete) { + let orig_onload = v[0].onload; + v[0].onload = function(v2) { + if(orig_onload) + orig_onload(); + set_img_act(v); + }; + } + else { + set_img_act(v); + } + }, + get() { + if(this._img == undefined && w.value != '') { + this._img = [new Image()]; + if(stw_widget.value && w.value != '[IMAGE DATA]') + this._img[0].src = w.value; + } + else if(this._img == undefined && path_widget.value) { + let image = new Image(); + image.src = path_widget.value; + + try { + let item = getFileItem('temp', path_widget.value); + let params = `?filename=${item.filename}&type=${item.type}&subfolder=${item.subfolder}`; + + let res = api.fetchApi('/view/validate'+params, { cache: "no-store" }).then(response => response); + if(res.status == 200) { + image.src = 'view'+params; + } + + this._img = [new Image()]; // placeholder + image.onload = function(v) { + set_img_act([image]); + }; + } + catch { + + } + } + return this._img; + } + }); + } + } +}) diff --git a/custom_nodes/ComfyUI-Impact-Pack/js/impact-pack.js b/custom_nodes/ComfyUI-Impact-Pack/js/impact-pack.js new file mode 100644 index 0000000000000000000000000000000000000000..0eb4fbf7f96bbe0b668d4e2ef116449ca36e6f57 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/js/impact-pack.js @@ -0,0 +1,795 @@ +import { ComfyApp, app } from "../../scripts/app.js"; +import { ComfyDialog, $el } from "../../scripts/ui.js"; +import { api } from "../../scripts/api.js"; + +let wildcards_list = []; +async function load_wildcards() { + let res = await api.fetchApi('/impact/wildcards/list'); + let data = await res.json(); + wildcards_list = data.data; +} + +load_wildcards(); + +export function get_wildcards_list() { + return wildcards_list; +} + +// temporary implementation (copying from https://github.com/pythongosssss/ComfyUI-WD14-Tagger) +// I think this should be included into master!! 
+class ImpactProgressBadge { + constructor() { + if (!window.__progress_badge__) { + window.__progress_badge__ = Symbol("__impact_progress_badge__"); + } + this.symbol = window.__progress_badge__; + } + + getState(node) { + return node[this.symbol] || {}; + } + + setState(node, state) { + node[this.symbol] = state; + app.canvas.setDirty(true); + } + + addStatusHandler(nodeType) { + if (nodeType[this.symbol]?.statusTagHandler) { + return; + } + if (!nodeType[this.symbol]) { + nodeType[this.symbol] = {}; + } + nodeType[this.symbol] = { + statusTagHandler: true, + }; + + api.addEventListener("impact/update_status", ({ detail }) => { + let { node, progress, text } = detail; + const n = app.graph.getNodeById(+(node || app.runningNodeId)); + if (!n) return; + const state = this.getState(n); + state.status = Object.assign(state.status || {}, { progress: text ? progress : null, text: text || null }); + this.setState(n, state); + }); + + const self = this; + const onDrawForeground = nodeType.prototype.onDrawForeground; + nodeType.prototype.onDrawForeground = function (ctx) { + const r = onDrawForeground?.apply?.(this, arguments); + const state = self.getState(this); + if (!state?.status?.text) { + return r; + } + + const { fgColor, bgColor, text, progress, progressColor } = { ...state.status }; + + ctx.save(); + ctx.font = "12px sans-serif"; + const sz = ctx.measureText(text); + ctx.fillStyle = bgColor || "dodgerblue"; + ctx.beginPath(); + ctx.roundRect(0, -LiteGraph.NODE_TITLE_HEIGHT - 20, sz.width + 12, 20, 5); + ctx.fill(); + + if (progress) { + ctx.fillStyle = progressColor || "green"; + ctx.beginPath(); + ctx.roundRect(0, -LiteGraph.NODE_TITLE_HEIGHT - 20, (sz.width + 12) * progress, 20, 5); + ctx.fill(); + } + + ctx.fillStyle = fgColor || "#fff"; + ctx.fillText(text, 6, -LiteGraph.NODE_TITLE_HEIGHT - 6); + ctx.restore(); + return r; + }; + } +} + +const input_tracking = {}; +const input_dirty = {}; +const output_tracking = {}; + +function progressExecuteHandler(event) { + if(event.detail.output.aux){ + const id = event.detail.node; + if(input_tracking.hasOwnProperty(id)) { + if(input_tracking.hasOwnProperty(id) && input_tracking[id][0] != event.detail.output.aux[0]) { + input_dirty[id] = true; + } + else{ + + } + } + + input_tracking[id] = event.detail.output.aux; + } +} + +function imgSendHandler(event) { + if(event.detail.images.length > 0){ + let data = event.detail.images[0]; + let filename = `${data.filename} [${data.type}]`; + + let nodes = app.graph._nodes; + for(let i in nodes) { + if(nodes[i].type == 'ImageReceiver') { + if(nodes[i].widgets[1].value == event.detail.link_id) { + if(data.subfolder) + nodes[i].widgets[0].value = `${data.subfolder}/${data.filename} [${data.type}]`; + else + nodes[i].widgets[0].value = `${data.filename} [${data.type}]`; + + let img = new Image(); + img.onload = (event) => { + nodes[i].imgs = [img]; + nodes[i].size[1] = Math.max(200, nodes[i].size[1]); + app.canvas.setDirty(true); + }; + img.src = `/view?filename=${data.filename}&type=${data.type}&subfolder=${data.subfolder}`+app.getPreviewFormatParam(); + } + } + } + } +} + + +function latentSendHandler(event) { + if(event.detail.images.length > 0){ + let data = event.detail.images[0]; + let filename = `${data.filename} [${data.type}]`; + + let nodes = app.graph._nodes; + for(let i in nodes) { + if(nodes[i].type == 'LatentReceiver') { + if(nodes[i].widgets[1].value == event.detail.link_id) { + if(data.subfolder) + nodes[i].widgets[0].value = `${data.subfolder}/${data.filename} [${data.type}]`; + else + 
nodes[i].widgets[0].value = `${data.filename} [${data.type}]`; + + let img = new Image(); + img.src = `/view?filename=${data.filename}&type=${data.type}&subfolder=${data.subfolder}`+app.getPreviewFormatParam(); + nodes[i].imgs = [img]; + nodes[i].size[1] = Math.max(200, nodes[i].size[1]); + } + } + } + } +} + + +function valueSendHandler(event) { + let nodes = app.graph._nodes; + for(let i in nodes) { + if(nodes[i].type == 'ImpactValueReceiver') { + if(nodes[i].widgets[2].value == event.detail.link_id) { + nodes[i].widgets[1].value = event.detail.value; + + let typ = typeof event.detail.value; + if(typ == 'string') { + nodes[i].widgets[0].value = "STRING"; + } + else if(typ == "boolean") { + nodes[i].widgets[0].value = "BOOLEAN"; + } + else if(typ != "number") { + nodes[i].widgets[0].value = typeof event.detail.value; + } + else if(Number.isInteger(event.detail.value)) { + nodes[i].widgets[0].value = "INT"; + } + else { + nodes[i].widgets[0].value = "FLOAT"; + } + } + } + } +} + + +const impactProgressBadge = new ImpactProgressBadge(); + +api.addEventListener("stop-iteration", () => { + document.getElementById("autoQueueCheckbox").checked = false; +}); +api.addEventListener("value-send", valueSendHandler); +api.addEventListener("img-send", imgSendHandler); +api.addEventListener("latent-send", latentSendHandler); +api.addEventListener("executed", progressExecuteHandler); + +app.registerExtension({ + name: "Comfy.Impack", + loadedGraphNode(node, app) { + if (node.comfyClass == "MaskPainter") { + input_dirty[node.id + ""] = true; + } + }, + + async beforeRegisterNodeDef(nodeType, nodeData, app) { + if (nodeData.name == "IterativeLatentUpscale" || nodeData.name == "IterativeImageUpscale" + || nodeData.name == "RegionalSampler"|| nodeData.name == "RegionalSamplerAdvanced") { + impactProgressBadge.addStatusHandler(nodeType); + } + + if(nodeData.name == "ImpactControlBridge") { + const onConnectionsChange = nodeType.prototype.onConnectionsChange; + nodeType.prototype.onConnectionsChange = function (type, index, connected, link_info) { + if(!link_info || this.inputs[0].type != '*') + return; + + // assign type + let slot_type = '*'; + + if(type == 2) { + slot_type = link_info.type; + } + else { + const node = app.graph.getNodeById(link_info.origin_id); + slot_type = node.outputs[link_info.origin_slot].type; + } + + this.inputs[0].type = slot_type; + this.outputs[0].type = slot_type; + this.outputs[0].label = slot_type; + } + } + + if(nodeData.name == "ImpactConditionalBranch" || nodeData.name == "ImpactConditionalBranchSelMode") { + const onConnectionsChange = nodeType.prototype.onConnectionsChange; + nodeType.prototype.onConnectionsChange = function (type, index, connected, link_info) { + if(!link_info || this.inputs[0].type != '*') + return; + + if(index >= 2) + return; + + // assign type + let slot_type = '*'; + + if(type == 2) { + slot_type = link_info.type; + } + else { + const node = app.graph.getNodeById(link_info.origin_id); + slot_type = node.outputs[link_info.origin_slot].type; + } + + this.inputs[0].type = slot_type; + this.inputs[1].type = slot_type; + this.outputs[0].type = slot_type; + this.outputs[0].label = slot_type; + } + } + + if(nodeData.name == "ImpactCompare") { + const onConnectionsChange = nodeType.prototype.onConnectionsChange; + nodeType.prototype.onConnectionsChange = function (type, index, connected, link_info) { + if(!link_info || this.inputs[0].type != '*' || type == 2) + return; + + // assign type + const node = app.graph.getNodeById(link_info.origin_id); + let 
slot_type = node.outputs[link_info.origin_slot].type; + + this.inputs[0].type = slot_type; + this.inputs[1].type = slot_type; + } + } + + if(nodeData.name === 'ImpactInversedSwitch') { + nodeData.output = ['*']; + nodeData.output_is_list = [false]; + nodeData.output_name = ['output1']; + + const onConnectionsChange = nodeType.prototype.onConnectionsChange; + nodeType.prototype.onConnectionsChange = function (type, index, connected, link_info) { + if(!link_info) + return; + + if(type == 2) { + // connect output + if(connected){ + if(app.graph._nodes_by_id[link_info.target_id].type == 'Reroute') { + app.graph._nodes_by_id[link_info.target_id].disconnectInput(link_info.target_slot); + } + + if(this.outputs[0].type == '*'){ + if(link_info.type == '*') { + app.graph._nodes_by_id[link_info.target_id].disconnectInput(link_info.target_slot); + } + else { + // propagate type + this.outputs[0].type = link_info.type; + this.outputs[0].name = link_info.type; + + for(let i in this.inputs) { + if(this.inputs[i].name != 'select') + this.inputs[i].type = link_info.type; + } + } + } + } + } + else { + if(app.graph._nodes_by_id[link_info.origin_id].type == 'Reroute') + this.disconnectInput(link_info.target_slot); + + // connect input + if(this.inputs[0].type == '*'){ + const node = app.graph.getNodeById(link_info.origin_id); + let origin_type = node.outputs[link_info.origin_slot].type; + + if(origin_type == '*') { + this.disconnectInput(link_info.target_slot); + return; + } + + for(let i in this.inputs) { + if(this.inputs[i].name != 'select') + this.inputs[i].type = origin_type; + } + + this.outputs[0].type = origin_type; + this.outputs[0].name = origin_type; + } + + return; + } + + if (!connected && this.outputs.length > 1) { + const stackTrace = new Error().stack; + + if( + !stackTrace.includes('LGraphNode.prototype.connect') && // for touch device + !stackTrace.includes('LGraphNode.connect') && // for mouse device + !stackTrace.includes('loadGraphData')) { + if(this.outputs[link_info.origin_slot].links.length == 0) + this.removeOutput(link_info.origin_slot); + } + } + + let slot_i = 1; + for (let i = 0; i < this.outputs.length; i++) { + this.outputs[i].name = `output${slot_i}` + slot_i++; + } + + let last_slot = this.outputs[this.outputs.length - 1]; + if (last_slot.slot_index == link_info.origin_slot) { + this.addOutput(`output${slot_i}`, this.outputs[0].type); + } + + let select_slot = this.inputs.find(x => x.name == "select"); + if(this.widgets) { + this.widgets[0].options.max = select_slot?this.outputs.length-1:this.outputs.length; + this.widgets[0].value = Math.min(this.widgets[0].value, this.widgets[0].options.max); + if(this.widgets[0].options.max > 0 && this.widgets[0].value == 0) + this.widgets[0].value = 1; + } + } + } + + if (nodeData.name === 'ImpactMakeImageList' || nodeData.name === 'ImpactMakeImageBatch' || + nodeData.name === 'CombineRegionalPrompts' || + nodeData.name === 'ImpactCombineConditionings' || nodeData.name === 'ImpactConcatConditionings' || + nodeData.name === 'ImpactSEGSConcat' || + nodeData.name === 'ImpactSwitch' || nodeData.name === 'LatentSwitch' || nodeData.name == 'SEGSSwitch') { + var input_name = "input"; + + switch(nodeData.name) { + case 'ImpactMakeImageList': + case 'ImpactMakeImageBatch': + input_name = "image"; + break; + + case 'ImpactSEGSConcat': + input_name = "segs"; + break; + + case 'CombineRegionalPrompts': + input_name = "regional_prompts"; + break; + + case 'ImpactCombineConditionings': + case 'ImpactConcatConditionings': + input_name = "conditioning"; + 
break; + + case 'LatentSwitch': + input_name = "input"; + break; + + case 'SEGSSwitch': + input_name = "input"; + break; + + case 'ImpactSwitch': + input_name = "input"; + } + + const onConnectionsChange = nodeType.prototype.onConnectionsChange; + nodeType.prototype.onConnectionsChange = function (type, index, connected, link_info) { + if(!link_info) + return; + + if(type == 2) { + // connect output + if(connected && index == 0){ + if(nodeData.name == 'ImpactSwitch' && app.graph._nodes_by_id[link_info.target_id]?.type == 'Reroute') { + app.graph._nodes_by_id[link_info.target_id].disconnectInput(link_info.target_slot); + } + + if(this.outputs[0].type == '*'){ + if(link_info.type == '*') { + app.graph._nodes_by_id[link_info.target_id].disconnectInput(link_info.target_slot); + } + else { + // propagate type + this.outputs[0].type = link_info.type; + this.outputs[0].label = link_info.type; + this.outputs[0].name = link_info.type; + + for(let i in this.inputs) { + let input_i = this.inputs[i]; + if(input_i.name != 'select' && input_i.name != 'sel_mode') + input_i.type = link_info.type; + } + } + } + } + + return; + } + else { + if(nodeData.name == 'ImpactSwitch' && app.graph._nodes_by_id[link_info.origin_id].type == 'Reroute') + this.disconnectInput(link_info.target_slot); + + // connect input + if(this.inputs[index].name == 'select' || this.inputs[index].name == 'sel_mode') + return; + + if(this.inputs[0].type == '*'){ + const node = app.graph.getNodeById(link_info.origin_id); + let origin_type = node.outputs[link_info.origin_slot].type; + + if(origin_type == '*') { + this.disconnectInput(link_info.target_slot); + return; + } + + for(let i in this.inputs) { + let input_i = this.inputs[i]; + if(input_i.name != 'select' && input_i.name != 'sel_mode') + input_i.type = origin_type; + } + + this.outputs[0].type = origin_type; + this.outputs[0].label = origin_type; + this.outputs[0].name = origin_type; + } + } + + let select_slot = this.inputs.find(x => x.name == "select"); + let mode_slot = this.inputs.find(x => x.name == "sel_mode"); + + let converted_count = 0; + converted_count += select_slot?1:0; + converted_count += mode_slot?1:0; + + if (!connected && (this.inputs.length > 1+converted_count)) { + const stackTrace = new Error().stack; + + if( + !stackTrace.includes('LGraphNode.prototype.connect') && // for touch device + !stackTrace.includes('LGraphNode.connect') && // for mouse device + !stackTrace.includes('loadGraphData') && + this.inputs[index].name != 'select') { + this.removeInput(index); + } + } + + let slot_i = 1; + for (let i = 0; i < this.inputs.length; i++) { + let input_i = this.inputs[i]; + if(input_i.name != 'select'&& input_i.name != 'sel_mode') { + input_i.name = `${input_name}${slot_i}` + slot_i++; + } + } + + let last_slot = this.inputs[this.inputs.length - 1]; + if ( + (last_slot.name == 'select' && last_slot.name != 'sel_mode' && this.inputs[this.inputs.length - 2].link != undefined) + || (last_slot.name != 'select' && last_slot.name != 'sel_mode' && last_slot.link != undefined)) { + this.addInput(`${input_name}${slot_i}`, this.outputs[0].type); + } + + if(this.widgets) { + this.widgets[0].options.max = select_slot?this.inputs.length-1:this.inputs.length; + this.widgets[0].value = Math.min(this.widgets[0].value, this.widgets[0].options.max); + if(this.widgets[0].options.max > 0 && this.widgets[0].value == 0) + this.widgets[0].value = 1; + } + } + } + }, + + nodeCreated(node, app) { + if(node.comfyClass == "MaskPainter") { + node.addWidget("button", "Edit mask", null, () => 
{ + ComfyApp.copyToClipspace(node); + ComfyApp.clipspace_return_node = node; + ComfyApp.open_maskeditor(); + }); + } + + switch(node.comfyClass) { + case "ToDetailerPipe": + case "ToDetailerPipeSDXL": + case "BasicPipeToDetailerPipe": + case "BasicPipeToDetailerPipeSDXL": + case "EditDetailerPipe": + case "FaceDetailer": + case "DetailerForEach": + case "DetailerForEachDebug": + case "DetailerForEachPipe": + case "DetailerForEachDebugPipe": + { + for(let i in node.widgets) { + let widget = node.widgets[i]; + if(widget.type === "customtext") { + widget.dynamicPrompts = false; + widget.inputEl.placeholder = "wildcard spec: if kept empty, this option will be ignored"; + widget.serializeValue = () => { + return node.widgets[i].value; + }; + } + } + } + break; + } + + if(node.comfyClass == "ImpactSEGSLabelFilter" || node.comfyClass == "SEGSLabelFilterDetailerHookProvider") { + Object.defineProperty(node.widgets[0], "value", { + set: (value) => { + const stackTrace = new Error().stack; + if(stackTrace.includes('inner_value_change')) { + if(node.widgets[1].value.trim() != "" && !node.widgets[1].value.trim().endsWith(",")) + node.widgets[1].value += ", " + + node.widgets[1].value += value; + node.widgets_values[1] = node.widgets[1].value; + } + + node._value = value; + }, + get: () => { + return node._value; + } + }); + } + + if(node.comfyClass == "UltralyticsDetectorProvider") { + let model_name_widget = node.widgets.find((w) => w.name === "model_name"); + let orig_draw = node.onDrawForeground; + node.onDrawForeground = function (ctx) { + const r = orig_draw?.apply?.(this, arguments); + + let is_seg = model_name_widget.value.startsWith('segm/') || model_name_widget.value.includes('-seg'); + if(!is_seg) { + var slot_pos = new Float32Array(2); + var pos = node.getConnectionPos(false, 1, slot_pos); + + pos[0] -= node.pos[0] - 10; + pos[1] -= node.pos[1]; + + ctx.beginPath(); + ctx.strokeStyle = "red"; + ctx.lineWidth = 4; + ctx.moveTo(pos[0] - 5, pos[1] - 5); + ctx.lineTo(pos[0] + 5, pos[1] + 5); + ctx.moveTo(pos[0] + 5, pos[1] - 5); + ctx.lineTo(pos[0] - 5, pos[1] + 5); + ctx.stroke(); + } + } + } + + if( + node.comfyClass == "ImpactWildcardEncode" || node.comfyClass == "ImpactWildcardProcessor" + || node.comfyClass == "ToDetailerPipe" || node.comfyClass == "ToDetailerPipeSDXL" + || node.comfyClass == "EditDetailerPipe" || node.comfyClass == "EditDetailerPipeSDXL" + || node.comfyClass == "BasicPipeToDetailerPipe" || node.comfyClass == "BasicPipeToDetailerPipeSDXL") { + node._value = "Select the LoRA to add to the text"; + node._wvalue = "Select the Wildcard to add to the text"; + + var tbox_id = 0; + var combo_id = 3; + var has_lora = true; + + switch(node.comfyClass){ + case "ImpactWildcardEncode": + tbox_id = 0; + combo_id = 3; + break; + + case "ImpactWildcardProcessor": + tbox_id = 0; + combo_id = 4; + has_lora = false; + break; + + case "ToDetailerPipe": + case "ToDetailerPipeSDXL": + case "EditDetailerPipe": + case "EditDetailerPipeSDXL": + case "BasicPipeToDetailerPipe": + case "BasicPipeToDetailerPipeSDXL": + tbox_id = 0; + combo_id = 1; + break; + } + + Object.defineProperty(node.widgets[combo_id+1], "value", { + set: (value) => { + const stackTrace = new Error().stack; + if(stackTrace.includes('inner_value_change')) { + if(value != "Select the Wildcard to add to the text") { + if(node.widgets[tbox_id].value != '') + node.widgets[tbox_id].value += ', ' + + node.widgets[tbox_id].value += value; + } + } + }, + get: () => { return "Select the Wildcard to add to the text"; } + }); + + 
Object.defineProperty(node.widgets[combo_id+1].options, "values", { + set: (x) => {}, + get: () => { + return wildcards_list; + } + }); + + if(has_lora) { + Object.defineProperty(node.widgets[combo_id], "value", { + set: (value) => { + const stackTrace = new Error().stack; + if(stackTrace.includes('inner_value_change')) { + if(value != "Select the LoRA to add to the text") { + let lora_name = value; + if (lora_name.endsWith('.safetensors')) { + lora_name = lora_name.slice(0, -12); + } + + node.widgets[tbox_id].value += ``; + if(node.widgets_values) { + node.widgets_values[tbox_id] = node.widgets[tbox_id].value; + } + } + } + + node._value = value; + }, + + get: () => { return "Select the LoRA to add to the text"; } + }); + } + + // Preventing validation errors from occurring in any situation. + if(has_lora) { + node.widgets[combo_id].serializeValue = () => { return "Select the LoRA to add to the text"; } + } + node.widgets[combo_id+1].serializeValue = () => { return "Select the Wildcard to add to the text"; } + } + + if(node.comfyClass == "ImpactWildcardProcessor" || node.comfyClass == "ImpactWildcardEncode") { + node.widgets[0].inputEl.placeholder = "Wildcard Prompt (User input)"; + node.widgets[1].inputEl.placeholder = "Populated Prompt (Will be generated automatically)"; + node.widgets[1].inputEl.disabled = true; + + const populated_text_widget = node.widgets.find((w) => w.name == 'populated_text'); + const mode_widget = node.widgets.find((w) => w.name == 'mode'); + + // mode combo + Object.defineProperty(mode_widget, "value", { + set: (value) => { + node._mode_value = value == true || value == "Populate"; + populated_text_widget.inputEl.disabled = value == true || value == "Populate"; + }, + get: () => { + if(node._mode_value != undefined) + return node._mode_value; + else + return true; + } + }); + } + + if (node.comfyClass == "MaskPainter") { + node.widgets[0].value = '#placeholder'; + + Object.defineProperty(node, "images", { + set: function(value) { + node._images = value; + }, + get: function() { + const id = node.id+""; + if(node.widgets[0].value != '#placeholder') { + var need_invalidate = false; + + if(input_dirty.hasOwnProperty(id) && input_dirty[id]) { + node.widgets[0].value = {...input_tracking[id][1]}; + input_dirty[id] = false; + need_invalidate = true + this._images = app.nodeOutputs[id].images; + } + + let filename = app.nodeOutputs[id]['aux'][1][0]['filename']; + let subfolder = app.nodeOutputs[id]['aux'][1][0]['subfolder']; + let type = app.nodeOutputs[id]['aux'][1][0]['type']; + + let item = + { + image_hash: app.nodeOutputs[id]['aux'][0], + forward_filename: app.nodeOutputs[id]['aux'][1][0]['filename'], + forward_subfolder: app.nodeOutputs[id]['aux'][1][0]['subfolder'], + forward_type: app.nodeOutputs[id]['aux'][1][0]['type'] + }; + + if(node._images) { + app.nodeOutputs[id].images = [{ + ...node._images[0], + ...item + }]; + + node.widgets[0].value = + { + ...node._images[0], + ...item + }; + } + else { + app.nodeOutputs[id].images = [{ + ...item + }]; + + node.widgets[0].value = + { + ...item + }; + } + + if(need_invalidate) { + Promise.all( + app.nodeOutputs[id].images.map((src) => { + return new Promise((r) => { + const img = new Image(); + img.onload = () => r(img); + img.onerror = () => r(null); + img.src = "/view?" 
+ new URLSearchParams(src).toString(); + }); + }) + ).then((imgs) => { + this.imgs = imgs.filter(Boolean); + this.setSizeForImage?.(); + app.graph.setDirtyCanvas(true); + }); + + app.nodeOutputs[id].images[0] = { ...node.widgets[0].value }; + } + + return app.nodeOutputs[id].images; + } + else { + return node._images; + } + } + }); + } + } +}); diff --git a/custom_nodes/ComfyUI-Impact-Pack/js/impact-sam-editor.js b/custom_nodes/ComfyUI-Impact-Pack/js/impact-sam-editor.js new file mode 100644 index 0000000000000000000000000000000000000000..7e5dbaaf3e3b520a3ef2c7796c0a0269a6ef2bd8 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/js/impact-sam-editor.js @@ -0,0 +1,636 @@ +import { app } from "../../scripts/app.js"; +import { ComfyDialog, $el } from "../../scripts/ui.js"; +import { ComfyApp } from "../../scripts/app.js"; +import { ClipspaceDialog } from "../../extensions/core/clipspace.js"; + +function addMenuHandler(nodeType, cb) { + const getOpts = nodeType.prototype.getExtraMenuOptions; + nodeType.prototype.getExtraMenuOptions = function () { + const r = getOpts.apply(this, arguments); + cb.apply(this, arguments); + return r; + }; +} + +// Helper function to convert a data URL to a Blob object +function dataURLToBlob(dataURL) { + const parts = dataURL.split(';base64,'); + const contentType = parts[0].split(':')[1]; + const byteString = atob(parts[1]); + const arrayBuffer = new ArrayBuffer(byteString.length); + const uint8Array = new Uint8Array(arrayBuffer); + for (let i = 0; i < byteString.length; i++) { + uint8Array[i] = byteString.charCodeAt(i); + } + return new Blob([arrayBuffer], { type: contentType }); +} + +function loadedImageToBlob(image) { + const canvas = document.createElement('canvas'); + + canvas.width = image.width; + canvas.height = image.height; + + const ctx = canvas.getContext('2d'); + + ctx.drawImage(image, 0, 0); + + const dataURL = canvas.toDataURL('image/png', 1); + const blob = dataURLToBlob(dataURL); + + return blob; +} + +async function uploadMask(filepath, formData) { + await fetch('/upload/mask', { + method: 'POST', + body: formData + }).then(response => {}).catch(error => { + console.error('Error:', error); + }); + + ComfyApp.clipspace.imgs[ComfyApp.clipspace['selectedIndex']] = new Image(); + ComfyApp.clipspace.imgs[ComfyApp.clipspace['selectedIndex']].src = `view?filename=${filepath.filename}&type=${filepath.type}`; + + if(ComfyApp.clipspace.images) + ComfyApp.clipspace.images[ComfyApp.clipspace['selectedIndex']] = filepath; + + ClipspaceDialog.invalidatePreview(); +} + +class ImpactSamEditorDialog extends ComfyDialog { + static instance = null; + + static getInstance() { + if(!ImpactSamEditorDialog.instance) { + ImpactSamEditorDialog.instance = new ImpactSamEditorDialog(); + } + + return ImpactSamEditorDialog.instance; + } + + constructor() { + super(); + this.element = $el("div.comfy-modal", { parent: document.body }, + [ $el("div.comfy-modal-content", + [...this.createButtons()]), + ]); + } + + createButtons() { + return []; + } + + createButton(name, callback) { + var button = document.createElement("button"); + button.innerText = name; + button.addEventListener("click", callback); + return button; + } + + createLeftButton(name, callback) { + var button = this.createButton(name, callback); + button.style.cssFloat = "left"; + button.style.marginRight = "4px"; + return button; + } + + createRightButton(name, callback) { + var button = this.createButton(name, callback); + button.style.cssFloat = "right"; + button.style.marginLeft = "4px"; + return button; 
+ } + + createLeftSlider(self, name, callback) { + const divElement = document.createElement('div'); + divElement.id = "sam-confidence-slider"; + divElement.style.cssFloat = "left"; + divElement.style.fontFamily = "sans-serif"; + divElement.style.marginRight = "4px"; + divElement.style.color = "var(--input-text)"; + divElement.style.backgroundColor = "var(--comfy-input-bg)"; + divElement.style.borderRadius = "8px"; + divElement.style.borderColor = "var(--border-color)"; + divElement.style.borderStyle = "solid"; + divElement.style.fontSize = "15px"; + divElement.style.height = "21px"; + divElement.style.padding = "1px 6px"; + divElement.style.display = "flex"; + divElement.style.position = "relative"; + divElement.style.top = "2px"; + self.confidence_slider_input = document.createElement('input'); + self.confidence_slider_input.setAttribute('type', 'range'); + self.confidence_slider_input.setAttribute('min', '0'); + self.confidence_slider_input.setAttribute('max', '100'); + self.confidence_slider_input.setAttribute('value', '70'); + const labelElement = document.createElement("label"); + labelElement.textContent = name; + + divElement.appendChild(labelElement); + divElement.appendChild(self.confidence_slider_input); + + self.confidence_slider_input.addEventListener("change", callback); + + return divElement; + } + + async detect_and_invalidate_mask_canvas(self) { + const mask_img = await self.detect(self); + + const canvas = self.maskCtx.canvas; + const ctx = self.maskCtx; + + ctx.clearRect(0, 0, canvas.width, canvas.height); + + await new Promise((resolve, reject) => { + self.mask_image = new Image(); + self.mask_image.onload = function() { + ctx.drawImage(self.mask_image, 0, 0, canvas.width, canvas.height); + resolve(); + }; + self.mask_image.onerror = reject; + self.mask_image.src = mask_img.src; + }); + } + + setlayout(imgCanvas, maskCanvas, pointsCanvas) { + const self = this; + + // If it is specified as relative, using it only as a hidden placeholder for padding is recommended + // to prevent anomalies where it exceeds a certain size and goes outside of the window. 
+ var placeholder = document.createElement("div"); + placeholder.style.position = "relative"; + placeholder.style.height = "50px"; + + var bottom_panel = document.createElement("div"); + bottom_panel.style.position = "absolute"; + bottom_panel.style.bottom = "0px"; + bottom_panel.style.left = "20px"; + bottom_panel.style.right = "20px"; + bottom_panel.style.height = "50px"; + + var brush = document.createElement("div"); + brush.id = "sam-brush"; + brush.style.backgroundColor = "blue"; + brush.style.outline = "2px solid pink"; + brush.style.borderRadius = "50%"; + brush.style.MozBorderRadius = "50%"; + brush.style.WebkitBorderRadius = "50%"; + brush.style.position = "absolute"; + brush.style.zIndex = 100; + brush.style.pointerEvents = "none"; + this.brush = brush; + this.element.appendChild(imgCanvas); + this.element.appendChild(maskCanvas); + this.element.appendChild(pointsCanvas); + this.element.appendChild(placeholder); // must below z-index than bottom_panel to avoid covering button + this.element.appendChild(bottom_panel); + document.body.appendChild(brush); + this.brush_size = 5; + + var confidence_slider = this.createLeftSlider(self, "Confidence", (event) => { + self.confidence = event.target.value; + }); + + var clearButton = this.createLeftButton("Clear", () => { + self.maskCtx.clearRect(0, 0, self.maskCanvas.width, self.maskCanvas.height); + self.pointsCtx.clearRect(0, 0, self.pointsCanvas.width, self.pointsCanvas.height); + + self.prompt_points = []; + + self.invalidatePointsCanvas(self); + }); + + var detectButton = this.createLeftButton("Detect", () => self.detect_and_invalidate_mask_canvas(self)); + + var cancelButton = this.createRightButton("Cancel", () => { + document.removeEventListener("mouseup", ImpactSamEditorDialog.handleMouseUp); + document.removeEventListener("keydown", ImpactSamEditorDialog.handleKeyDown); + self.close(); + }); + + self.saveButton = this.createRightButton("Save", () => { + document.removeEventListener("mouseup", ImpactSamEditorDialog.handleMouseUp); + document.removeEventListener("keydown", ImpactSamEditorDialog.handleKeyDown); + self.save(self); + }); + + var undoButton = this.createLeftButton("Undo", () => { + if(self.prompt_points.length > 0) { + self.prompt_points.pop(); + self.pointsCtx.clearRect(0, 0, self.pointsCanvas.width, self.pointsCanvas.height); + self.invalidatePointsCanvas(self); + } + }); + + bottom_panel.appendChild(clearButton); + bottom_panel.appendChild(detectButton); + bottom_panel.appendChild(self.saveButton); + bottom_panel.appendChild(cancelButton); + bottom_panel.appendChild(confidence_slider); + bottom_panel.appendChild(undoButton); + + imgCanvas.style.position = "relative"; + imgCanvas.style.top = "200"; + imgCanvas.style.left = "0"; + + maskCanvas.style.position = "absolute"; + maskCanvas.style.opacity = 0.5; + pointsCanvas.style.position = "absolute"; + } + + show() { + this.mask_image = null; + self.prompt_points = []; + + this.message_box = $el("p", ["Please wait a moment while the SAM model and the image are being loaded."]); + this.element.appendChild(this.message_box); + + if(self.imgCtx) { + self.imgCtx.clearRect(0, 0, self.imageCanvas.width, self.imageCanvas.height); + } + + const target_image_path = ComfyApp.clipspace.imgs[ComfyApp.clipspace['selectedIndex']].src; + this.load_sam(target_image_path); + + if(!this.is_layout_created) { + // layout + const imgCanvas = document.createElement('canvas'); + const maskCanvas = document.createElement('canvas'); + const pointsCanvas = document.createElement('canvas'); + + 
imgCanvas.id = "imageCanvas"; + maskCanvas.id = "maskCanvas"; + pointsCanvas.id = "pointsCanvas"; + + this.setlayout(imgCanvas, maskCanvas, pointsCanvas); + + // prepare content + this.imgCanvas = imgCanvas; + this.maskCanvas = maskCanvas; + this.pointsCanvas = pointsCanvas; + this.maskCtx = maskCanvas.getContext('2d'); + this.pointsCtx = pointsCanvas.getContext('2d'); + + this.is_layout_created = true; + + // replacement of onClose hook since close is not real close + const self = this; + const observer = new MutationObserver(function(mutations) { + mutations.forEach(function(mutation) { + if (mutation.type === 'attributes' && mutation.attributeName === 'style') { + if(self.last_display_style && self.last_display_style != 'none' && self.element.style.display == 'none') { + ComfyApp.onClipspaceEditorClosed(); + } + + self.last_display_style = self.element.style.display; + } + }); + }); + + const config = { attributes: true }; + observer.observe(this.element, config); + } + + this.setImages(target_image_path, this.imgCanvas, this.pointsCanvas); + + if(ComfyApp.clipspace_return_node) { + this.saveButton.innerText = "Save to node"; + } + else { + this.saveButton.innerText = "Save"; + } + this.saveButton.disabled = true; + + this.element.style.display = "block"; + this.element.style.zIndex = 8888; // NOTE: alert dialog must be high priority. + } + + updateBrushPreview(self, event) { + event.preventDefault(); + + const centerX = event.pageX; + const centerY = event.pageY; + + const brush = self.brush; + + brush.style.width = self.brush_size * 2 + "px"; + brush.style.height = self.brush_size * 2 + "px"; + brush.style.left = (centerX - self.brush_size) + "px"; + brush.style.top = (centerY - self.brush_size) + "px"; + } + + setImages(target_image_path, imgCanvas, pointsCanvas) { + const imgCtx = imgCanvas.getContext('2d'); + const maskCtx = this.maskCtx; + const maskCanvas = this.maskCanvas; + + const self = this; + + // image load + const orig_image = new Image(); + window.addEventListener("resize", () => { + // repositioning + imgCanvas.width = window.innerWidth - 250; + imgCanvas.height = window.innerHeight - 200; + + // redraw image + let drawWidth = orig_image.width; + let drawHeight = orig_image.height; + + if (orig_image.width > imgCanvas.width) { + drawWidth = imgCanvas.width; + drawHeight = (drawWidth / orig_image.width) * orig_image.height; + } + + if (drawHeight > imgCanvas.height) { + drawHeight = imgCanvas.height; + drawWidth = (drawHeight / orig_image.height) * orig_image.width; + } + + imgCtx.drawImage(orig_image, 0, 0, drawWidth, drawHeight); + + // update mask + pointsCanvas.width = drawWidth; + pointsCanvas.height = drawHeight; + pointsCanvas.style.top = imgCanvas.offsetTop + "px"; + pointsCanvas.style.left = imgCanvas.offsetLeft + "px"; + + maskCanvas.width = drawWidth; + maskCanvas.height = drawHeight; + maskCanvas.style.top = imgCanvas.offsetTop + "px"; + maskCanvas.style.left = imgCanvas.offsetLeft + "px"; + + self.invalidateMaskCanvas(self); + self.invalidatePointsCanvas(self); + }); + + // original image load + orig_image.onload = () => self.onLoaded(self); + const rgb_url = new URL(target_image_path); + rgb_url.searchParams.delete('channel'); + rgb_url.searchParams.set('channel', 'rgb'); + orig_image.src = rgb_url; + self.image = orig_image; + } + + onLoaded(self) { + if(self.message_box) { + self.element.removeChild(self.message_box); + self.message_box = null; + } + + window.dispatchEvent(new Event('resize')); + + self.setEventHandler(pointsCanvas); + 
self.saveButton.disabled = false; + } + + setEventHandler(targetCanvas) { + targetCanvas.addEventListener("contextmenu", (event) => { + event.preventDefault(); + }); + + const self = this; + targetCanvas.addEventListener('pointermove', (event) => this.updateBrushPreview(self,event)); + targetCanvas.addEventListener('pointerdown', (event) => this.handlePointerDown(self,event)); + targetCanvas.addEventListener('pointerover', (event) => { this.brush.style.display = "block"; }); + targetCanvas.addEventListener('pointerleave', (event) => { this.brush.style.display = "none"; }); + document.addEventListener('keydown', ImpactSamEditorDialog.handleKeyDown); + } + + static handleKeyDown(event) { + const self = ImpactSamEditorDialog.instance; + if (event.key === '=') { // positive + brush.style.backgroundColor = "blue"; + brush.style.outline = "2px solid pink"; + self.is_positive_mode = true; + } else if (event.key === '-') { // negative + brush.style.backgroundColor = "red"; + brush.style.outline = "2px solid skyblue"; + self.is_positive_mode = false; + } + } + + is_positive_mode = true; + prompt_points = []; + confidence = 70; + + invalidatePointsCanvas(self) { + const ctx = self.pointsCtx; + + for (const i in self.prompt_points) { + const [is_positive, x, y] = self.prompt_points[i]; + + const scaledX = x * ctx.canvas.width / self.image.width; + const scaledY = y * ctx.canvas.height / self.image.height; + + if(is_positive) + ctx.fillStyle = "blue"; + else + ctx.fillStyle = "red"; + ctx.beginPath(); + ctx.arc(scaledX, scaledY, 3, 0, 3 * Math.PI); + ctx.fill(); + } + }줘 + + invalidateMaskCanvas(self) { + if(self.mask_image) { + self.maskCtx.clearRect(0, 0, self.maskCanvas.width, self.maskCanvas.height); + self.maskCtx.drawImage(self.mask_image, 0, 0, self.maskCanvas.width, self.maskCanvas.height); + } + } + + async load_sam(url) { + const parsedUrl = new URL(url); + const searchParams = new URLSearchParams(parsedUrl.search); + + const filename = searchParams.get("filename") || ""; + const fileType = searchParams.get("type") || ""; + const subfolder = searchParams.get("subfolder") || ""; + + const data = { + sam_model_name: "auto", + filename: filename, + type: fileType, + subfolder: subfolder + }; + + fetch('/sam/prepare', { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(data) + }); + } + + async detect(self) { + const positive_points = []; + const negative_points = []; + + for(const i in self.prompt_points) { + const [is_positive, x, y] = self.prompt_points[i]; + const point = [x,y]; + if(is_positive) + positive_points.push(point); + else + negative_points.push(point); + } + + const data = { + positive_points: positive_points, + negative_points: negative_points, + threshold: self.confidence/100 + }; + + const response = await fetch('/sam/detect', { + method: 'POST', + headers: { 'Content-Type': 'image/png' }, + body: JSON.stringify(data) + }); + + const blob = await response.blob(); + const url = URL.createObjectURL(blob); + + return new Promise((resolve, reject) => { + const image = new Image(); + image.onload = () => resolve(image); + image.onerror = reject; + image.src = url; + }); + } + + handlePointerDown(self, event) { + if ([0, 2, 5].includes(event.button)) { + event.preventDefault(); + const x = event.offsetX || event.targetTouches[0].clientX - maskRect.left; + const y = event.offsetY || event.targetTouches[0].clientY - maskRect.top; + + const originalX = x * self.image.width / self.pointsCanvas.width; + const originalY = y * self.image.height / 
self.pointsCanvas.height; + + var point = null; + if (event.button == 0) { + // positive + point = [true, originalX, originalY]; + } else { + // negative + point = [false, originalX, originalY]; + } + + self.prompt_points.push(point); + + self.invalidatePointsCanvas(self); + } + } + + async save(self) { + if(!self.mask_image) { + this.close(); + return; + } + + const save_canvas = document.createElement('canvas'); + + const save_ctx = save_canvas.getContext('2d', {willReadFrequently:true}); + save_canvas.width = self.mask_image.width; + save_canvas.height = self.mask_image.height; + + save_ctx.drawImage(self.mask_image, 0, 0, save_canvas.width, save_canvas.height); + + const save_data = save_ctx.getImageData(0, 0, save_canvas.width, save_canvas.height); + + // refine mask image + for (let i = 0; i < save_data.data.length; i += 4) { + if(save_data.data[i]) { + save_data.data[i+3] = 0; + } + else { + save_data.data[i+3] = 255; + } + + save_data.data[i] = 0; + save_data.data[i+1] = 0; + save_data.data[i+2] = 0; + } + + save_ctx.globalCompositeOperation = 'source-over'; + save_ctx.putImageData(save_data, 0, 0); + + const formData = new FormData(); + const filename = "clipspace-mask-" + performance.now() + ".png"; + + const item = + { + "filename": filename, + "subfolder": "", + "type": "temp", + }; + + if(ComfyApp.clipspace.images) + ComfyApp.clipspace.images[0] = item; + + if(ComfyApp.clipspace.widgets) { + const index = ComfyApp.clipspace.widgets.findIndex(obj => obj.name === 'image'); + + if(index >= 0) + ComfyApp.clipspace.widgets[index].value = `${filename} [temp]`; + } + + const dataURL = save_canvas.toDataURL(); + const blob = dataURLToBlob(dataURL); + + let original_url = new URL(this.image.src); + + const original_ref = { filename: original_url.searchParams.get('filename') }; + + let original_subfolder = original_url.searchParams.get("subfolder"); + if(original_subfolder) + original_ref.subfolder = original_subfolder; + + let original_type = original_url.searchParams.get("type"); + if(original_type) + original_ref.type = original_type; + + formData.append('image', blob, filename); + formData.append('original_ref', JSON.stringify(original_ref)); + formData.append('type', "temp"); + + await uploadMask(item, formData); + ComfyApp.onClipspaceEditorSave(); + this.close(); + } +} + +app.registerExtension({ + name: "Comfy.Impact.SAMEditor", + init(app) { + const callback = + function () { + let dlg = ImpactSamEditorDialog.getInstance(); + dlg.show(); + }; + + const context_predicate = () => ComfyApp.clipspace && ComfyApp.clipspace.imgs && ComfyApp.clipspace.imgs.length > 0 + ClipspaceDialog.registerButton("Impact SAM Detector", context_predicate, callback); + }, + + async beforeRegisterNodeDef(nodeType, nodeData, app) { + if (Array.isArray(nodeData.output) && (nodeData.output.includes("MASK") || nodeData.output.includes("IMAGE"))) { + addMenuHandler(nodeType, function (_, options) { + options.unshift({ + content: "Open in SAM Detector", + callback: () => { + ComfyApp.copyToClipspace(this); + ComfyApp.clipspace_return_node = this; + + let dlg = ImpactSamEditorDialog.getInstance(); + dlg.show(); + }, + }); + }); + } + } +}); + diff --git a/custom_nodes/ComfyUI-Impact-Pack/js/impact-segs-picker.js b/custom_nodes/ComfyUI-Impact-Pack/js/impact-segs-picker.js new file mode 100644 index 0000000000000000000000000000000000000000..01319f072923294d9a531aa296435ffa78eafe2a --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/js/impact-segs-picker.js @@ -0,0 +1,182 @@ +import { ComfyApp, app } from 
"../../scripts/app.js"; +import { ComfyDialog, $el } from "../../scripts/ui.js"; +import { api } from "../../scripts/api.js"; + +async function open_picker(node) { + const resp = await api.fetchApi(`/impact/segs/picker/count?id=${node.id}`); + const body = await resp.text(); + + let cnt = parseInt(body); + + var existingPicker = document.getElementById('impact-picker'); + if (existingPicker) { + existingPicker.parentNode.removeChild(existingPicker); + } + + var gallery = document.createElement('div'); + gallery.id = 'impact-picker'; + + gallery.style.position = "absolute"; + gallery.style.height = "80%"; + gallery.style.width = "80%"; + gallery.style.top = "10%"; + gallery.style.left = "10%"; + gallery.style.display = 'flex'; + gallery.style.flexWrap = 'wrap'; + gallery.style.maxHeight = '600px'; + gallery.style.overflow = 'auto'; + gallery.style.backgroundColor = 'rgba(0,0,0,0.3)'; + gallery.style.padding = '20px'; + gallery.draggable = false; + gallery.style.zIndex = 5000; + + var doneButton = document.createElement('button'); + doneButton.textContent = 'Done'; + doneButton.style.padding = '10px 10px'; + doneButton.style.border = 'none'; + doneButton.style.borderRadius = '5px'; + doneButton.style.fontFamily = 'Arial, sans-serif'; + doneButton.style.fontSize = '16px'; + doneButton.style.fontWeight = 'bold'; + doneButton.style.color = '#fff'; + doneButton.style.background = 'linear-gradient(to bottom, #0070B8, #003D66)'; + doneButton.style.boxShadow = '0 2px 4px rgba(0, 0, 0, 0.4)'; + doneButton.style.margin = "20px"; + doneButton.style.height = "40px"; + + var cancelButton = document.createElement('button'); + cancelButton.textContent = 'Cancel'; + cancelButton.style.padding = '10px 10px'; + cancelButton.style.border = 'none'; + cancelButton.style.borderRadius = '5px'; + cancelButton.style.fontFamily = 'Arial, sans-serif'; + cancelButton.style.fontSize = '16px'; + cancelButton.style.fontWeight = 'bold'; + cancelButton.style.color = '#fff'; + cancelButton.style.background = 'linear-gradient(to bottom, #ff70B8, #ff3D66)'; + cancelButton.style.boxShadow = '0 2px 4px rgba(0, 0, 0, 0.4)'; + cancelButton.style.margin = "20px"; + cancelButton.style.height = "40px"; + + const w = node.widgets.find((w) => w.name == 'picks'); + let prev_selected = w.value.split(',').map(function(item) { + return parseInt(item, 10); + }); + + let images = []; + doneButton.onclick = () => { + var result = ''; + for(let i in images) { + if(images[i].isSelected) { + if(result != '') + result += ', '; + + result += (parseInt(i)+1); + } + } + + w.value = result; + + gallery.parentNode.removeChild(gallery); + } + + cancelButton.onclick = () => { + gallery.parentNode.removeChild(gallery); + } + + var panel = document.createElement('div'); + panel.style.clear = 'both'; + panel.style.width = '100%'; + panel.style.height = '40px'; + panel.style.justifyContent = 'center'; + panel.style.alignItems = 'center'; + panel.style.display = 'flex'; + panel.appendChild(doneButton); + panel.appendChild(cancelButton); + gallery.appendChild(panel); + + var hint = document.createElement('label'); + hint.style.position = 'absolute'; + hint.innerHTML = 'Click: Toggle Selection
Ctrl-click: Single Selection'; + gallery.appendChild(hint); + + let max_size = 300; + + for(let i=0; i image.naturalHeight) { + ratio = max_size/image.naturalWidth; + } + else { + ratio = max_size/image.naturalHeight; + } + + let width = image.naturalWidth * ratio; + let height = image.naturalHeight * ratio; + + if(width < height) { + this.style.marginLeft = (200-width)/2+"px"; + } + else{ + this.style.marginTop = (200-height)/2+"px"; + } + + this.style.width = width+"px"; + this.style.height = height+"px"; + this.style.objectFit = 'cover'; + } + + image.addEventListener('click', function(event) { + if(event.ctrlKey) { + for(let i in images) { + if(images[i].isSelected) { + images[i].style.border = 'none'; + images[i].isSelected = false; + } + } + + image.style.border = '2px solid #006699'; + image.isSelected = true; + + return; + } + + if(image.isSelected) { + image.style.border = 'none'; + image.isSelected = false; + } + else { + image.style.border = '2px solid #006699'; + image.isSelected = true; + } + }); + + gallery.appendChild(image); + } + + document.body.appendChild(gallery); +} + + +app.registerExtension({ + name: "Comfy.Impack.Picker", + + nodeCreated(node, app) { + if(node.comfyClass == "ImpactSEGSPicker") { + node.addWidget("button", "pick", "image", () => { + open_picker(node); + }); + } + } +}); \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/latent.png b/custom_nodes/ComfyUI-Impact-Pack/latent.png new file mode 100644 index 0000000000000000000000000000000000000000..19fed324a25a7e1a2252400e7752ce5586742429 Binary files /dev/null and b/custom_nodes/ComfyUI-Impact-Pack/latent.png differ diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/additional_dependencies.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/additional_dependencies.py new file mode 100644 index 0000000000000000000000000000000000000000..799b0b141370a53ca25163f58c011c2db5e22cb6 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/additional_dependencies.py @@ -0,0 +1,12 @@ +import sys +import subprocess + + +def ensure_onnx_package(): + try: + import onnxruntime + except Exception: + if "python_embeded" in sys.executable or "python_embedded" in sys.executable: + subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', 'onnxruntime']) + else: + subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', 'onnxruntime']) diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/animatediff_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/animatediff_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..fd7c9c1e8d37c16cc649ef33c5846729ce39746d --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/animatediff_nodes.py @@ -0,0 +1,160 @@ +from nodes import MAX_RESOLUTION +from impact.utils import * +import impact.core as core +from impact.core import SEG +from impact.segs_nodes import SEGSPaste + + +class SEGSDetailerForAnimateDiff: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "image_frames": ("IMAGE", ), + "segs": ("SEGS", ), + "guide_size": ("FLOAT", {"default": 256, "min": 64, "max": MAX_RESOLUTION, "step": 8}), + "guide_size_for": ("BOOLEAN", {"default": True, "label_on": "bbox", "label_off": "crop_region"}), + "max_size": ("FLOAT", {"default": 768, "min": 64, "max": MAX_RESOLUTION, "step": 8}), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 
0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS,), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS,), + "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}), + "basic_pipe": ("BASIC_PIPE",), + "refiner_ratio": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0}) + }, + "optional": { + "refiner_basic_pipe_opt": ("BASIC_PIPE",), + # TODO: "inpaint_model": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + # TODO: "noise_mask_feather": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), + } + } + + RETURN_TYPES = ("SEGS", "IMAGE") + RETURN_NAMES = ("segs", "cnet_images") + OUTPUT_IS_LIST = (False, True) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + @staticmethod + def do_detail(image_frames, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + denoise, basic_pipe, refiner_ratio=None, refiner_basic_pipe_opt=None, inpaint_model=False, noise_mask_feather=0): + + model, clip, vae, positive, negative = basic_pipe + if refiner_basic_pipe_opt is None: + refiner_model, refiner_clip, refiner_positive, refiner_negative = None, None, None, None + else: + refiner_model, refiner_clip, _, refiner_positive, refiner_negative = refiner_basic_pipe_opt + + segs = core.segs_scale_match(segs, image_frames.shape) + + new_segs = [] + cnet_image_list = [] + + for seg in segs[1]: + cropped_image_frames = None + + for image in image_frames: + image = image.unsqueeze(0) + cropped_image = seg.cropped_image if seg.cropped_image is not None else crop_tensor4(image, seg.crop_region) + cropped_image = to_tensor(cropped_image) + if cropped_image_frames is None: + cropped_image_frames = cropped_image + else: + cropped_image_frames = torch.concat((cropped_image_frames, cropped_image), dim=0) + + cropped_image_frames = cropped_image_frames.cpu().numpy() + enhanced_image_tensor, cnet_images = core.enhance_detail_for_animatediff(cropped_image_frames, model, clip, vae, guide_size, guide_size_for, max_size, + seg.bbox, seed, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, seg.cropped_mask, + refiner_ratio=refiner_ratio, refiner_model=refiner_model, + refiner_clip=refiner_clip, refiner_positive=refiner_positive, + refiner_negative=refiner_negative, control_net_wrapper=seg.control_net_wrapper, + inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + if cnet_images is not None: + cnet_image_list.extend(cnet_images) + + if enhanced_image_tensor is None: + new_cropped_image = cropped_image_frames + else: + new_cropped_image = enhanced_image_tensor.cpu().numpy() + + new_seg = SEG(new_cropped_image, seg.cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, None) + new_segs.append(new_seg) + + return (segs[0], new_segs), cnet_image_list + + def doit(self, image_frames, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + denoise, basic_pipe, refiner_ratio=None, refiner_basic_pipe_opt=None, inpaint_model=False, noise_mask_feather=0): + + segs, cnet_images = SEGSDetailerForAnimateDiff.do_detail(image_frames, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, + scheduler, denoise, basic_pipe, refiner_ratio, refiner_basic_pipe_opt, + inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + if len(cnet_images) == 0: + cnet_images = [empty_pil_tensor()] + + return (segs, cnet_images) + + +class DetailerForEachPipeForAnimateDiff: + @classmethod + def INPUT_TYPES(cls): + return 
{"required": { + "image_frames": ("IMAGE", ), + "segs": ("SEGS", ), + "guide_size": ("FLOAT", {"default": 384, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "guide_size_for": ("BOOLEAN", {"default": True, "label_on": "bbox", "label_off": "crop_region"}), + "max_size": ("FLOAT", {"default": 1024, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS,), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS,), + "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}), + "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}), + "basic_pipe": ("BASIC_PIPE", ), + "refiner_ratio": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0}), + }, + "optional": { + "detailer_hook": ("DETAILER_HOOK",), + "refiner_basic_pipe_opt": ("BASIC_PIPE",), + # "inpaint_model": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + # "noise_mask_feather": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), + } + } + + RETURN_TYPES = ("IMAGE", "SEGS", "BASIC_PIPE", "IMAGE") + RETURN_NAMES = ("image", "segs", "basic_pipe", "cnet_images") + OUTPUT_IS_LIST = (False, False, False, True) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + @staticmethod + def doit(image_frames, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + denoise, feather, basic_pipe, refiner_ratio=None, detailer_hook=None, refiner_basic_pipe_opt=None, + inpaint_model=False, noise_mask_feather=0): + + enhanced_segs = [] + cnet_image_list = [] + + for sub_seg in segs[1]: + single_seg = segs[0], [sub_seg] + enhanced_seg, cnet_images = SEGSDetailerForAnimateDiff().do_detail(image_frames, single_seg, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + denoise, basic_pipe, refiner_ratio, refiner_basic_pipe_opt, inpaint_model, noise_mask_feather) + + image_frames = SEGSPaste.doit(image_frames, enhanced_seg, feather, alpha=255)[0] + + if cnet_images is not None: + cnet_image_list.extend(cnet_images) + + if detailer_hook is not None: + detailer_hook.post_paste(image_frames) + + enhanced_segs += enhanced_seg[1] + + new_segs = segs[0], enhanced_segs + return image_frames, new_segs, basic_pipe, cnet_image_list diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/bridge_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/bridge_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..038ea84506b5cc21240fefc11461e8d6d937b5e7 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/bridge_nodes.py @@ -0,0 +1,258 @@ +import os +from PIL import ImageOps +from impact.utils import * + +from . 
import core +import random + +class PreviewBridge: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "images": ("IMAGE",), + "image": ("STRING", {"default": ""}), + }, + "hidden": {"unique_id": "UNIQUE_ID"}, + } + + RETURN_TYPES = ("IMAGE", "MASK", ) + + FUNCTION = "doit" + + OUTPUT_NODE = True + + CATEGORY = "ImpactPack/Util" + + def __init__(self): + super().__init__() + self.output_dir = folder_paths.get_temp_directory() + self.type = "temp" + self.prev_hash = None + + @staticmethod + def load_image(pb_id): + is_fail = False + if pb_id not in core.preview_bridge_image_id_map: + is_fail = True + + image_path, ui_item = core.preview_bridge_image_id_map[pb_id] + + if not os.path.isfile(image_path): + is_fail = True + + if not is_fail: + i = Image.open(image_path) + i = ImageOps.exif_transpose(i) + image = i.convert("RGB") + image = np.array(image).astype(np.float32) / 255.0 + image = torch.from_numpy(image)[None,] + + if 'A' in i.getbands(): + mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0 + mask = 1. - torch.from_numpy(mask) + else: + mask = torch.zeros((64, 64), dtype=torch.float32, device="cpu") + else: + image = empty_pil_tensor() + mask = torch.zeros((64, 64), dtype=torch.float32, device="cpu") + ui_item = { + "filename": 'empty.png', + "subfolder": '', + "type": 'temp' + } + + return image, mask.unsqueeze(0), ui_item + + def doit(self, images, image, unique_id): + need_refresh = False + + if unique_id not in core.preview_bridge_cache: + need_refresh = True + + elif core.preview_bridge_cache[unique_id][0] is not images: + need_refresh = True + + if not need_refresh: + pixels, mask, path_item = PreviewBridge.load_image(image) + image = [path_item] + else: + res = nodes.PreviewImage().save_images(images, filename_prefix="PreviewBridge/PB-") + image2 = res['ui']['images'] + pixels = images + mask = torch.zeros((64, 64), dtype=torch.float32, device="cpu") + + path = os.path.join(folder_paths.get_temp_directory(), 'PreviewBridge', image2[0]['filename']) + core.set_previewbridge_image(unique_id, path, image2[0]) + core.preview_bridge_image_id_map[image] = (path, image2[0]) + core.preview_bridge_image_name_map[unique_id, path] = (image, image2[0]) + core.preview_bridge_cache[unique_id] = (images, image2) + + image = image2 + + return { + "ui": {"images": image}, + "result": (pixels, mask, ), + } + + +def decode_latent(latent_tensor, preview_method, vae_opt=None): + if vae_opt is not None: + image = nodes.VAEDecode().decode(vae_opt, latent_tensor)[0] + return image + + from comfy.cli_args import LatentPreviewMethod + import comfy.latent_formats as latent_formats + + if preview_method.startswith("TAE"): + if preview_method == "TAESD15": + decoder_name = "taesd" + else: + decoder_name = "taesdxl" + + vae = nodes.VAELoader().load_vae(decoder_name)[0] + image = nodes.VAEDecode().decode(vae, latent_tensor)[0] + return image + + else: + if preview_method == "Latent2RGB-SD15": + latent_format = latent_formats.SD15() + method = LatentPreviewMethod.Latent2RGB + else: # preview_method == "Latent2RGB-SDXL" + latent_format = latent_formats.SDXL() + method = LatentPreviewMethod.Latent2RGB + + previewer = core.get_previewer("cpu", latent_format=latent_format, force=True, method=method) + pil_image = previewer.decode_latent_to_preview(latent_tensor['samples']) + pixels_size = pil_image.size[0]*8, pil_image.size[1]*8 + resized_image = pil_image.resize(pixels_size, Image.NONE) + + return to_tensor(resized_image).unsqueeze(0) + + +class PreviewBridgeLatent: + @classmethod + def 
INPUT_TYPES(s): + return {"required": { + "latent": ("LATENT",), + "image": ("STRING", {"default": ""}), + "preview_method": (["Latent2RGB-SDXL", "Latent2RGB-SD15", "TAESDXL", "TAESD15"],), + }, + "optional": { + "vae_opt": ("VAE", ) + }, + "hidden": {"unique_id": "UNIQUE_ID"}, + } + + RETURN_TYPES = ("LATENT", "MASK", ) + + FUNCTION = "doit" + + OUTPUT_NODE = True + + CATEGORY = "ImpactPack/Util" + + def __init__(self): + super().__init__() + self.output_dir = folder_paths.get_temp_directory() + self.type = "temp" + self.prev_hash = None + self.prefix_append = "_temp_" + ''.join(random.choice("abcdefghijklmnopqrstupvxyz") for x in range(5)) + + @staticmethod + def load_image(pb_id): + is_fail = False + if pb_id not in core.preview_bridge_image_id_map: + is_fail = True + + image_path, ui_item = core.preview_bridge_image_id_map[pb_id] + + if not os.path.isfile(image_path): + is_fail = True + + if not is_fail: + i = Image.open(image_path) + i = ImageOps.exif_transpose(i) + image = i.convert("RGB") + image = np.array(image).astype(np.float32) / 255.0 + image = torch.from_numpy(image)[None,] + + if 'A' in i.getbands(): + mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0 + mask = 1. - torch.from_numpy(mask) + else: + mask = None + else: + image = empty_pil_tensor() + mask = None + ui_item = { + "filename": 'empty.png', + "subfolder": '', + "type": 'temp' + } + + return image, mask, ui_item + + def doit(self, latent, image, preview_method, vae_opt=None, unique_id=None): + need_refresh = False + + if unique_id not in core.preview_bridge_cache: + need_refresh = True + + elif (core.preview_bridge_cache[unique_id][0] is not latent + or (vae_opt is None and core.preview_bridge_cache[unique_id][2] is not None) + or (vae_opt is None and core.preview_bridge_cache[unique_id][1] != preview_method) + or (vae_opt is not None and core.preview_bridge_cache[unique_id][2] is not vae_opt)): + need_refresh = True + + if not need_refresh: + pixels, mask, path_item = PreviewBridge.load_image(image) + + if mask is None: + mask = torch.ones(latent['samples'].shape[2:], dtype=torch.float32, device="cpu").unsqueeze(0) + if 'noise_mask' in latent: + res_latent = latent.copy() + del res_latent['noise_mask'] + else: + res_latent = latent + else: + res_latent = latent.copy() + res_latent['noise_mask'] = mask + + res_image = [path_item] + else: + decoded_image = decode_latent(latent, preview_method, vae_opt) + + if 'noise_mask' in latent: + mask = latent['noise_mask'] + + decoded_pil = to_pil(decoded_image) + + inverted_mask = 1 - mask # invert + resized_mask = resize_mask(inverted_mask, (decoded_image.shape[1], decoded_image.shape[2])) + result_pil = apply_mask_alpha_to_pil(decoded_pil, resized_mask) + + full_output_folder, filename, counter, _, _ = folder_paths.get_save_image_path("PreviewBridge/PBL-"+self.prefix_append, folder_paths.get_temp_directory(), result_pil.size[0], result_pil.size[1]) + file = f"{filename}_{counter}.png" + result_pil.save(os.path.join(full_output_folder, file), compress_level=4) + res_image = [{ + 'filename': file, + 'subfolder': 'PreviewBridge', + 'type': 'temp', + }] + else: + mask = torch.ones(latent['samples'].shape[2:], dtype=torch.float32, device="cpu").unsqueeze(0) + res = nodes.PreviewImage().save_images(decoded_image, filename_prefix="PreviewBridge/PBL-") + res_image = res['ui']['images'] + + path = os.path.join(folder_paths.get_temp_directory(), 'PreviewBridge', res_image[0]['filename']) + core.set_previewbridge_image(unique_id, path, res_image[0]) + 
core.preview_bridge_image_id_map[image] = (path, res_image[0]) + core.preview_bridge_image_name_map[unique_id, path] = (image, res_image[0]) + core.preview_bridge_cache[unique_id] = (latent, preview_method, vae_opt, res_image) + + res_latent = latent + + return { + "ui": {"images": res_image}, + "result": (res_latent, mask, ), + } diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/config.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/config.py new file mode 100644 index 0000000000000000000000000000000000000000..b771e9f526309e383b46e63bda1a7bf9253276e8 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/config.py @@ -0,0 +1,67 @@ +import configparser +import os + + +version_code = [4, 78] +version = f"V{version_code[0]}.{version_code[1]}" + (f'.{version_code[2]}' if len(version_code) > 2 else '') + +dependency_version = 20 + +my_path = os.path.dirname(__file__) +old_config_path = os.path.join(my_path, "impact-pack.ini") +config_path = os.path.join(my_path, "..", "..", "impact-pack.ini") +latent_letter_path = os.path.join(my_path, "..", "..", "latent.png") + +MAX_RESOLUTION = 8192 + + +def write_config(): + config = configparser.ConfigParser() + config['default'] = { + 'dependency_version': str(dependency_version), + 'mmdet_skip': str(get_config()['mmdet_skip']), + 'sam_editor_cpu': str(get_config()['sam_editor_cpu']), + 'sam_editor_model': get_config()['sam_editor_model'], + 'custom_wildcards': get_config()['custom_wildcards'], + 'disable_gpu_opencv': get_config()['disable_gpu_opencv'], + } + with open(config_path, 'w') as configfile: + config.write(configfile) + + +def read_config(): + try: + config = configparser.ConfigParser() + config.read(config_path) + default_conf = config['default'] + + return { + 'dependency_version': int(default_conf['dependency_version']), + 'mmdet_skip': default_conf['mmdet_skip'].lower() == 'true' if 'mmdet_skip' in default_conf else True, + 'sam_editor_cpu': default_conf['sam_editor_cpu'].lower() == 'true' if 'sam_editor_cpu' in default_conf else False, + 'sam_editor_model': 'sam_vit_b_01ec64.pth', + 'custom_wildcards': default_conf['custom_wildcards'] if 'custom_wildcards' in default_conf else os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "custom_wildcards")), + 'disable_gpu_opencv': default_conf['disable_gpu_opencv'].lower() == 'true' if 'disable_gpu_opencv' in default_conf else True + } + + except Exception: + return { + 'dependency_version': 0, + 'mmdet_skip': True, + 'sam_editor_cpu': False, + 'sam_editor_model': 'sam_vit_b_01ec64.pth', + 'custom_wildcards': os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "custom_wildcards")), + 'disable_gpu_opencv': True + } + + +cached_config = None + + +def get_config(): + global cached_config + + if cached_config is None: + cached_config = read_config() + + return cached_config diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/core.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/core.py new file mode 100644 index 0000000000000000000000000000000000000000..76b12c71494e2e72d3332fc28f1132dafe15adc3 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/core.py @@ -0,0 +1,1875 @@ +import torch +from segment_anything import SamPredictor + +from impact.utils import * +from collections import namedtuple +import numpy as np +from skimage.measure import label + +import nodes +import comfy_extras.nodes_upscale_model as model_upscale +from server import PromptServer +import comfy +import impact.wildcards as wildcards +import 
math +import cv2 +import time +from comfy import model_management +from impact import utils +from impact import impact_sampling +from concurrent.futures import ThreadPoolExecutor + + +SEG = namedtuple("SEG", + ['cropped_image', 'cropped_mask', 'confidence', 'crop_region', 'bbox', 'label', 'control_net_wrapper'], + defaults=[None]) + +pb_id_cnt = time.time() +preview_bridge_image_id_map = {} +preview_bridge_image_name_map = {} +preview_bridge_cache = {} + + +def set_previewbridge_image(node_id, file, item): + global pb_id_cnt + + if file in preview_bridge_image_name_map: + pb_id = preview_bridge_image_name_map[node_id, file] + if pb_id.startswith(f"${node_id}"): + return pb_id + + pb_id = f"${node_id}-{pb_id_cnt}" + preview_bridge_image_id_map[pb_id] = (file, item) + preview_bridge_image_name_map[node_id, file] = (pb_id, item) + pb_id_cnt += 1 + + return pb_id + + +def erosion_mask(mask, grow_mask_by): + mask = make_2d_mask(mask) + + w = mask.shape[1] + h = mask.shape[0] + + device = comfy.model_management.get_torch_device() + mask = mask.clone().to(device) + mask2 = torch.nn.functional.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), size=(w, h), mode="bilinear").to(device) + if grow_mask_by == 0: + mask_erosion = mask2 + else: + kernel_tensor = torch.ones((1, 1, grow_mask_by, grow_mask_by)).to(device) + padding = math.ceil((grow_mask_by - 1) / 2) + + mask_erosion = torch.clamp(torch.nn.functional.conv2d(mask2.round(), kernel_tensor, padding=padding), 0, 1) + + return mask_erosion[:, :, :w, :h].round().cpu() + + +class REGIONAL_PROMPT: + def __init__(self, mask, sampler): + mask = make_2d_mask(mask) + + self.mask = mask + self.sampler = sampler + self.mask_erosion = None + self.erosion_factor = None + + def clone_with_sampler(self, sampler): + rp = REGIONAL_PROMPT(self.mask, sampler) + rp.mask_erosion = self.mask_erosion + rp.erosion_factor = self.erosion_factor + return rp + + def get_mask_erosion(self, factor): + if self.mask_erosion is None or self.erosion_factor != factor: + self.mask_erosion = erosion_mask(self.mask, factor) + self.erosion_factor = factor + + return self.mask_erosion + + +class NO_BBOX_DETECTOR: + pass + + +class NO_SEGM_DETECTOR: + pass + + +def create_segmasks(results): + bboxs = results[1] + segms = results[2] + confidence = results[3] + + results = [] + for i in range(len(segms)): + item = (bboxs[i], segms[i].astype(np.float32), confidence[i]) + results.append(item) + return results + + +def gen_detection_hints_from_mask_area(x, y, mask, threshold, use_negative): + mask = make_2d_mask(mask) + + points = [] + plabs = [] + + # minimum sampling step >= 3 + y_step = max(3, int(mask.shape[0] / 20)) + x_step = max(3, int(mask.shape[1] / 20)) + + for i in range(0, len(mask), y_step): + for j in range(0, len(mask[i]), x_step): + if mask[i][j] > threshold: + points.append((x + j, y + i)) + plabs.append(1) + elif use_negative and mask[i][j] == 0: + points.append((x + j, y + i)) + plabs.append(0) + + return points, plabs + + +def gen_negative_hints(w, h, x1, y1, x2, y2): + npoints = [] + nplabs = [] + + # minimum sampling step >= 3 + y_step = max(3, int(w / 20)) + x_step = max(3, int(h / 20)) + + for i in range(10, h - 10, y_step): + for j in range(10, w - 10, x_step): + if not (x1 - 10 <= j and j <= x2 + 10 and y1 - 10 <= i and i <= y2 + 10): + npoints.append((j, i)) + nplabs.append(0) + + return npoints, nplabs + + +def enhance_detail(image, model, clip, vae, guide_size, guide_size_for_bbox, max_size, bbox, seed, steps, cfg, + sampler_name, + scheduler, 
positive, negative, denoise, noise_mask, force_inpaint, + wildcard_opt=None, wildcard_opt_concat_mode=None, + detailer_hook=None, + refiner_ratio=None, refiner_model=None, refiner_clip=None, refiner_positive=None, + refiner_negative=None, control_net_wrapper=None, cycle=1, + inpaint_model=False, noise_mask_feather=0): + + if noise_mask is not None: + noise_mask = utils.tensor_gaussian_blur_mask(noise_mask, noise_mask_feather) + noise_mask = noise_mask.squeeze(3) + + if wildcard_opt is not None and wildcard_opt != "": + model, _, wildcard_positive = wildcards.process_with_loras(wildcard_opt, model, clip) + + if wildcard_opt_concat_mode == "concat": + positive = nodes.ConditioningConcat().concat(positive, wildcard_positive)[0] + else: + positive = wildcard_positive + + h = image.shape[1] + w = image.shape[2] + + bbox_h = bbox[3] - bbox[1] + bbox_w = bbox[2] - bbox[0] + + # Skip processing if the detected bbox is already larger than the guide_size + if not force_inpaint and bbox_h >= guide_size and bbox_w >= guide_size: + print(f"Detailer: segment skip (enough big)") + return None, None + + if guide_size_for_bbox: # == "bbox" + # Scale up based on the smaller dimension between width and height. + upscale = guide_size / min(bbox_w, bbox_h) + else: + # for cropped_size + upscale = guide_size / min(w, h) + + new_w = int(w * upscale) + new_h = int(h * upscale) + + # safeguard + if 'aitemplate_keep_loaded' in model.model_options: + max_size = min(4096, max_size) + + if new_w > max_size or new_h > max_size: + upscale *= max_size / max(new_w, new_h) + new_w = int(w * upscale) + new_h = int(h * upscale) + + if not force_inpaint: + if upscale <= 1.0: + print(f"Detailer: segment skip [determined upscale factor={upscale}]") + return None, None + + if new_w == 0 or new_h == 0: + print(f"Detailer: segment skip [zero size={new_w, new_h}]") + return None, None + else: + if upscale <= 1.0 or new_w == 0 or new_h == 0: + print(f"Detailer: force inpaint") + upscale = 1.0 + new_w = w + new_h = h + + if detailer_hook is not None: + new_w, new_h = detailer_hook.touch_scaled_size(new_w, new_h) + + print(f"Detailer: segment upscale for ({bbox_w, bbox_h}) | crop region {w, h} x {upscale} -> {new_w, new_h}") + + # upscale + upscaled_image = tensor_resize(image, new_w, new_h) + + cnet_pils = None + if control_net_wrapper is not None: + positive, negative, cnet_pils = control_net_wrapper.apply(positive, negative, upscaled_image, noise_mask) + + # prepare mask + if noise_mask is not None and inpaint_model: + positive, negative, latent_image = nodes.InpaintModelConditioning().encode(positive, negative, upscaled_image, vae, noise_mask) + else: + latent_image = to_latent_image(upscaled_image, vae) + if noise_mask is not None: + latent_image['noise_mask'] = noise_mask + + if detailer_hook is not None: + latent_image = detailer_hook.post_encode(latent_image) + + refined_latent = latent_image + + # ksampler + for i in range(0, cycle): + if detailer_hook is not None: + if detailer_hook is not None: + detailer_hook.set_steps((i, cycle)) + + refined_latent = detailer_hook.cycle_latent(refined_latent) + + model2, seed2, steps2, cfg2, sampler_name2, scheduler2, positive2, negative2, upscaled_latent2, denoise2 = \ + detailer_hook.pre_ksample(model, seed+i, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise) + else: + model2, seed2, steps2, cfg2, sampler_name2, scheduler2, positive2, negative2, upscaled_latent2, denoise2 = \ + model, seed + i, steps, cfg, sampler_name, scheduler, positive, negative, 
latent_image, denoise + + refined_latent = impact_sampling.ksampler_wrapper(model2, seed2, steps2, cfg2, sampler_name2, scheduler2, positive2, negative2, + refined_latent, denoise2, refiner_ratio, refiner_model, refiner_clip, refiner_positive, refiner_negative) + + if detailer_hook is not None: + refined_latent = detailer_hook.pre_decode(refined_latent) + + # non-latent downscale - latent downscale cause bad quality + refined_image = vae.decode(refined_latent['samples']) + + if detailer_hook is not None: + refined_image = detailer_hook.post_decode(refined_image) + + # downscale + refined_image = tensor_resize(refined_image, w, h) + + # prevent mixing of device + refined_image = refined_image.cpu() + + # don't convert to latent - latent break image + # preserving pil is much better + return refined_image, cnet_pils + + +def enhance_detail_for_animatediff(image_frames, model, clip, vae, guide_size, guide_size_for_bbox, max_size, bbox, seed, steps, cfg, + sampler_name, + scheduler, positive, negative, denoise, noise_mask, + wildcard_opt=None, wildcard_opt_concat_mode=None, + detailer_hook=None, + refiner_ratio=None, refiner_model=None, refiner_clip=None, refiner_positive=None, + refiner_negative=None, control_net_wrapper=None, inpaint_model=False, noise_mask_feather=0): + if noise_mask is not None: + noise_mask = utils.tensor_gaussian_blur_mask(noise_mask, noise_mask_feather) + noise_mask = noise_mask.squeeze(3) + + if wildcard_opt is not None and wildcard_opt != "": + model, _, wildcard_positive = wildcards.process_with_loras(wildcard_opt, model, clip) + + if wildcard_opt_concat_mode == "concat": + positive = nodes.ConditioningConcat().concat(positive, wildcard_positive)[0] + else: + positive = wildcard_positive + + h = image_frames.shape[1] + w = image_frames.shape[2] + + bbox_h = bbox[3] - bbox[1] + bbox_w = bbox[2] - bbox[0] + + # Skip processing if the detected bbox is already larger than the guide_size + if guide_size_for_bbox: # == "bbox" + # Scale up based on the smaller dimension between width and height. 
+ upscale = guide_size / min(bbox_w, bbox_h) + else: + # for cropped_size + upscale = guide_size / min(w, h) + + new_w = int(w * upscale) + new_h = int(h * upscale) + + # safeguard + if 'aitemplate_keep_loaded' in model.model_options: + max_size = min(4096, max_size) + + if new_w > max_size or new_h > max_size: + upscale *= max_size / max(new_w, new_h) + new_w = int(w * upscale) + new_h = int(h * upscale) + + if upscale <= 1.0 or new_w == 0 or new_h == 0: + print(f"Detailer: force inpaint") + upscale = 1.0 + new_w = w + new_h = h + + if detailer_hook is not None: + new_w, new_h = detailer_hook.touch_scaled_size(new_w, new_h) + + print(f"Detailer: segment upscale for ({bbox_w, bbox_h}) | crop region {w, h} x {upscale} -> {new_w, new_h}") + + # upscale the mask tensor by a factor of 2 using bilinear interpolation + if isinstance(noise_mask, np.ndarray): + noise_mask = torch.from_numpy(noise_mask) + + if len(noise_mask.shape) == 2: + noise_mask = noise_mask.unsqueeze(0) + else: # == 3 + noise_mask = noise_mask + + upscaled_mask = None + + for single_mask in noise_mask: + single_mask = single_mask.unsqueeze(0).unsqueeze(0) + upscaled_single_mask = torch.nn.functional.interpolate(single_mask, size=(new_h, new_w), mode='bilinear', align_corners=False) + upscaled_single_mask = upscaled_single_mask.squeeze(0) + + if upscaled_mask is None: + upscaled_mask = upscaled_single_mask + else: + upscaled_mask = torch.cat((upscaled_mask, upscaled_single_mask), dim=0) + + latent_frames = None + for image in image_frames: + image = torch.from_numpy(image).unsqueeze(0) + + # upscale + upscaled_image = tensor_resize(image, new_w, new_h) + + # ksampler + samples = to_latent_image(upscaled_image, vae)['samples'] + + if latent_frames is None: + latent_frames = samples + else: + latent_frames = torch.concat((latent_frames, samples), dim=0) + + cnet_images = None + if control_net_wrapper is not None: + positive, negative, cnet_images = control_net_wrapper.apply(positive, negative, torch.from_numpy(image_frames), noise_mask, use_acn=True) + + if len(upscaled_mask) != len(image_frames) and len(upscaled_mask) > 1: + print(f"[Impact Pack] WARN: DetailerForAnimateDiff - The number of the mask frames({len(upscaled_mask)}) and the image frames({len(image_frames)}) are different. 
Combine the mask frames and apply.") + combined_mask = upscaled_mask[0].to(torch.uint8) + + for frame_mask in upscaled_mask[1:]: + combined_mask |= (frame_mask * 255).to(torch.uint8) + + combined_mask = (combined_mask/255.0).to(torch.float32) + + upscaled_mask = combined_mask.expand(len(image_frames), -1, -1) + upscaled_mask = utils.to_binary_mask(upscaled_mask, 0.1) + + latent = { + 'noise_mask': upscaled_mask, + 'samples': latent_frames + } + + if detailer_hook is not None: + latent = detailer_hook.post_encode(latent) + + refined_latent = impact_sampling.ksampler_wrapper(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, + latent, denoise, refiner_ratio, refiner_model, refiner_clip, refiner_positive, refiner_negative) + + if detailer_hook is not None: + refined_latent = detailer_hook.pre_decode(refined_latent) + + refined_image_frames = None + for refined_sample in refined_latent['samples']: + refined_sample = refined_sample.unsqueeze(0) + + # non-latent downscale - latent downscale cause bad quality + refined_image = vae.decode(refined_sample) + + if refined_image_frames is None: + refined_image_frames = refined_image + else: + refined_image_frames = torch.concat((refined_image_frames, refined_image), dim=0) + + if detailer_hook is not None: + refined_image_frames = detailer_hook.post_decode(refined_image_frames) + + refined_image_frames = nodes.ImageScale().upscale(image=refined_image_frames, upscale_method='lanczos', width=w, height=h, crop='disabled')[0] + + return refined_image_frames, cnet_images + + +def composite_to(dest_latent, crop_region, src_latent): + x1 = crop_region[0] + y1 = crop_region[1] + + # composite to original latent + lc = nodes.LatentComposite() + orig_image = lc.composite(dest_latent, src_latent, x1, y1) + + return orig_image[0] + + +def sam_predict(predictor, points, plabs, bbox, threshold): + point_coords = None if not points else np.array(points) + point_labels = None if not plabs else np.array(plabs) + + box = np.array([bbox]) if bbox is not None else None + + cur_masks, scores, _ = predictor.predict(point_coords=point_coords, point_labels=point_labels, box=box) + + total_masks = [] + + selected = False + max_score = 0 + max_mask = None + for idx in range(len(scores)): + if scores[idx] > max_score: + max_score = scores[idx] + max_mask = cur_masks[idx] + + if scores[idx] >= threshold: + selected = True + total_masks.append(cur_masks[idx]) + else: + pass + + if not selected and max_mask is not None: + total_masks.append(max_mask) + + return total_masks + + +def make_sam_mask(sam_model, segs, image, detection_hint, dilation, + threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative): + if sam_model.is_auto_mode: + device = comfy.model_management.get_torch_device() + sam_model.safe_to.to_device(sam_model, device=device) + + try: + predictor = SamPredictor(sam_model) + image = np.clip(255. 
* image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8) + predictor.set_image(image, "RGB") + + total_masks = [] + + use_small_negative = mask_hint_use_negative == "Small" + + # seg_shape = segs[0] + segs = segs[1] + if detection_hint == "mask-points": + points = [] + plabs = [] + + for i in range(len(segs)): + bbox = segs[i].bbox + center = center_of_bbox(segs[i].bbox) + points.append(center) + + # small point is background, big point is foreground + if use_small_negative and bbox[2] - bbox[0] < 10: + plabs.append(0) + else: + plabs.append(1) + + detected_masks = sam_predict(predictor, points, plabs, None, threshold) + total_masks += detected_masks + + else: + for i in range(len(segs)): + bbox = segs[i].bbox + center = center_of_bbox(bbox) + + x1 = max(bbox[0] - bbox_expansion, 0) + y1 = max(bbox[1] - bbox_expansion, 0) + x2 = min(bbox[2] + bbox_expansion, image.shape[1]) + y2 = min(bbox[3] + bbox_expansion, image.shape[0]) + + dilated_bbox = [x1, y1, x2, y2] + + points = [] + plabs = [] + if detection_hint == "center-1": + points.append(center) + plabs = [1] # 1 = foreground point, 0 = background point + + elif detection_hint == "horizontal-2": + gap = (x2 - x1) / 3 + points.append((x1 + gap, center[1])) + points.append((x1 + gap * 2, center[1])) + plabs = [1, 1] + + elif detection_hint == "vertical-2": + gap = (y2 - y1) / 3 + points.append((center[0], y1 + gap)) + points.append((center[0], y1 + gap * 2)) + plabs = [1, 1] + + elif detection_hint == "rect-4": + x_gap = (x2 - x1) / 3 + y_gap = (y2 - y1) / 3 + points.append((x1 + x_gap, center[1])) + points.append((x1 + x_gap * 2, center[1])) + points.append((center[0], y1 + y_gap)) + points.append((center[0], y1 + y_gap * 2)) + plabs = [1, 1, 1, 1] + + elif detection_hint == "diamond-4": + x_gap = (x2 - x1) / 3 + y_gap = (y2 - y1) / 3 + points.append((x1 + x_gap, y1 + y_gap)) + points.append((x1 + x_gap * 2, y1 + y_gap)) + points.append((x1 + x_gap, y1 + y_gap * 2)) + points.append((x1 + x_gap * 2, y1 + y_gap * 2)) + plabs = [1, 1, 1, 1] + + elif detection_hint == "mask-point-bbox": + center = center_of_bbox(segs[i].bbox) + points.append(center) + plabs = [1] + + elif detection_hint == "mask-area": + points, plabs = gen_detection_hints_from_mask_area(segs[i].crop_region[0], segs[i].crop_region[1], + segs[i].cropped_mask, + mask_hint_threshold, use_small_negative) + + if mask_hint_use_negative == "Outter": + npoints, nplabs = gen_negative_hints(image.shape[0], image.shape[1], + segs[i].crop_region[0], segs[i].crop_region[1], + segs[i].crop_region[2], segs[i].crop_region[3]) + + points += npoints + plabs += nplabs + + detected_masks = sam_predict(predictor, points, plabs, dilated_bbox, threshold) + total_masks += detected_masks + + # merge every collected masks + mask = combine_masks2(total_masks) + + finally: + if sam_model.is_auto_mode: + sam_model.to(device="cpu") + + if mask is not None: + mask = mask.float() + mask = dilate_mask(mask.cpu().numpy(), dilation) + mask = torch.from_numpy(mask) + else: + size = image.shape[0], image.shape[1] + mask = torch.zeros(size, dtype=torch.float32, device="cpu") # empty mask + + mask = utils.make_3d_mask(mask) + return mask + + +def generate_detection_hints(image, seg, center, detection_hint, dilated_bbox, mask_hint_threshold, use_small_negative, + mask_hint_use_negative): + [x1, y1, x2, y2] = dilated_bbox + + points = [] + plabs = [] + if detection_hint == "center-1": + points.append(center) + plabs = [1] # 1 = foreground point, 0 = background point + + elif detection_hint == "horizontal-2": + gap 
= (x2 - x1) / 3 + points.append((x1 + gap, center[1])) + points.append((x1 + gap * 2, center[1])) + plabs = [1, 1] + + elif detection_hint == "vertical-2": + gap = (y2 - y1) / 3 + points.append((center[0], y1 + gap)) + points.append((center[0], y1 + gap * 2)) + plabs = [1, 1] + + elif detection_hint == "rect-4": + x_gap = (x2 - x1) / 3 + y_gap = (y2 - y1) / 3 + points.append((x1 + x_gap, center[1])) + points.append((x1 + x_gap * 2, center[1])) + points.append((center[0], y1 + y_gap)) + points.append((center[0], y1 + y_gap * 2)) + plabs = [1, 1, 1, 1] + + elif detection_hint == "diamond-4": + x_gap = (x2 - x1) / 3 + y_gap = (y2 - y1) / 3 + points.append((x1 + x_gap, y1 + y_gap)) + points.append((x1 + x_gap * 2, y1 + y_gap)) + points.append((x1 + x_gap, y1 + y_gap * 2)) + points.append((x1 + x_gap * 2, y1 + y_gap * 2)) + plabs = [1, 1, 1, 1] + + elif detection_hint == "mask-point-bbox": + center = center_of_bbox(seg.bbox) + points.append(center) + plabs = [1] + + elif detection_hint == "mask-area": + points, plabs = gen_detection_hints_from_mask_area(seg.crop_region[0], seg.crop_region[1], + seg.cropped_mask, + mask_hint_threshold, use_small_negative) + + if mask_hint_use_negative == "Outter": + npoints, nplabs = gen_negative_hints(image.shape[0], image.shape[1], + seg.crop_region[0], seg.crop_region[1], + seg.crop_region[2], seg.crop_region[3]) + + points += npoints + plabs += nplabs + + return points, plabs + + +def convert_and_stack_masks(masks): + if len(masks) == 0: + return None + + mask_tensors = [] + for mask in masks: + mask_array = np.array(mask, dtype=np.uint8) + mask_tensor = torch.from_numpy(mask_array) + mask_tensors.append(mask_tensor) + + stacked_masks = torch.stack(mask_tensors, dim=0) + stacked_masks = stacked_masks.unsqueeze(1) + + return stacked_masks + + +def merge_and_stack_masks(stacked_masks, group_size): + if stacked_masks is None: + return None + + num_masks = stacked_masks.size(0) + merged_masks = [] + + for i in range(0, num_masks, group_size): + subset_masks = stacked_masks[i:i + group_size] + merged_mask = torch.any(subset_masks, dim=0) + merged_masks.append(merged_mask) + + if len(merged_masks) > 0: + merged_masks = torch.stack(merged_masks, dim=0) + + return merged_masks + + +def segs_scale_match(segs, target_shape): + h = segs[0][0] + w = segs[0][1] + + th = target_shape[1] + tw = target_shape[2] + + if (h == th and w == tw) or h == 0 or w == 0: + return segs + + rh = th / h + rw = tw / w + + new_segs = [] + for seg in segs[1]: + cropped_image = seg.cropped_image + cropped_mask = seg.cropped_mask + x1, y1, x2, y2 = seg.crop_region + bx1, by1, bx2, by2 = seg.bbox + + crop_region = int(x1*rw), int(y1*rw), int(x2*rh), int(y2*rh) + bbox = int(bx1*rw), int(by1*rw), int(bx2*rh), int(by2*rh) + new_w = crop_region[2] - crop_region[0] + new_h = crop_region[3] - crop_region[1] + + cropped_mask = torch.from_numpy(cropped_mask) + cropped_mask = torch.nn.functional.interpolate(cropped_mask.unsqueeze(0).unsqueeze(0), size=(new_h, new_w), mode='bilinear', align_corners=False) + cropped_mask = cropped_mask.squeeze(0).squeeze(0).numpy() + + if cropped_image is not None: + cropped_image = tensor_resize(cropped_image if isinstance(cropped_image, torch.Tensor) else torch.from_numpy(cropped_image), new_w, new_h) + cropped_image = cropped_image.numpy() + + new_seg = SEG(cropped_image, cropped_mask, seg.confidence, crop_region, bbox, seg.label, seg.control_net_wrapper) + new_segs.append(new_seg) + + return (th, tw), new_segs + + +# Used Python's slicing feature. 
stacked_masks[2::3] means starting from index 2, selecting every third tensor with a step size of 3. +# This allows for quickly obtaining the last tensor of every three tensors in stacked_masks. +def every_three_pick_last(stacked_masks): + selected_masks = stacked_masks[2::3] + return selected_masks + + +def make_sam_mask_segmented(sam_model, segs, image, detection_hint, dilation, + threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative): + if sam_model.is_auto_mode: + device = comfy.model_management.get_torch_device() + sam_model.safe_to.to_device(sam_model, device=device) + + try: + predictor = SamPredictor(sam_model) + image = np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8) + predictor.set_image(image, "RGB") + + total_masks = [] + + use_small_negative = mask_hint_use_negative == "Small" + + # seg_shape = segs[0] + segs = segs[1] + if detection_hint == "mask-points": + points = [] + plabs = [] + + for i in range(len(segs)): + bbox = segs[i].bbox + center = center_of_bbox(bbox) + points.append(center) + + # small point is background, big point is foreground + if use_small_negative and bbox[2] - bbox[0] < 10: + plabs.append(0) + else: + plabs.append(1) + + detected_masks = sam_predict(predictor, points, plabs, None, threshold) + total_masks += detected_masks + + else: + for i in range(len(segs)): + bbox = segs[i].bbox + center = center_of_bbox(bbox) + x1 = max(bbox[0] - bbox_expansion, 0) + y1 = max(bbox[1] - bbox_expansion, 0) + x2 = min(bbox[2] + bbox_expansion, image.shape[1]) + y2 = min(bbox[3] + bbox_expansion, image.shape[0]) + + dilated_bbox = [x1, y1, x2, y2] + + points, plabs = generate_detection_hints(image, segs[i], center, detection_hint, dilated_bbox, + mask_hint_threshold, use_small_negative, + mask_hint_use_negative) + + detected_masks = sam_predict(predictor, points, plabs, dilated_bbox, threshold) + + total_masks += detected_masks + + # merge every collected masks + mask = combine_masks2(total_masks) + + finally: + if sam_model.is_auto_mode: + sam_model.cpu() + + pass + + mask_working_device = torch.device("cpu") + + if mask is not None: + mask = mask.float() + mask = dilate_mask(mask.cpu().numpy(), dilation) + mask = torch.from_numpy(mask) + mask = mask.to(device=mask_working_device) + else: + # Extracting batch, height and width + height, width, _ = image.shape + mask = torch.zeros( + (height, width), dtype=torch.float32, device=mask_working_device + ) # empty mask + + stacked_masks = convert_and_stack_masks(total_masks) + + return (mask, merge_and_stack_masks(stacked_masks, group_size=3)) + # return every_three_pick_last(stacked_masks) + + +def segs_bitwise_and_mask(segs, mask): + mask = make_2d_mask(mask) + + if mask is None: + print("[SegsBitwiseAndMask] Cannot operate: MASK is empty.") + return ([],) + + items = [] + + mask = (mask.cpu().numpy() * 255).astype(np.uint8) + + for seg in segs[1]: + cropped_mask = (seg.cropped_mask * 255).astype(np.uint8) + crop_region = seg.crop_region + + cropped_mask2 = mask[crop_region[1]:crop_region[3], crop_region[0]:crop_region[2]] + + new_mask = np.bitwise_and(cropped_mask.astype(np.uint8), cropped_mask2) + new_mask = new_mask.astype(np.float32) / 255.0 + + item = SEG(seg.cropped_image, new_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, None) + items.append(item) + + return segs[0], items + + +def apply_mask_to_each_seg(segs, masks): + if masks is None: + print("[SegsBitwiseAndMask] Cannot operate: MASK is empty.") + return (segs[0], [],) + + items = [] + + masks = 
masks.squeeze(1) + + for seg, mask in zip(segs[1], masks): + cropped_mask = (seg.cropped_mask * 255).astype(np.uint8) + crop_region = seg.crop_region + + cropped_mask2 = (mask.cpu().numpy() * 255).astype(np.uint8) + cropped_mask2 = cropped_mask2[crop_region[1]:crop_region[3], crop_region[0]:crop_region[2]] + + new_mask = np.bitwise_and(cropped_mask.astype(np.uint8), cropped_mask2) + new_mask = new_mask.astype(np.float32) / 255.0 + + item = SEG(seg.cropped_image, new_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, None) + items.append(item) + + return segs[0], items + + +def dilate_segs(segs, factor): + if factor == 0: + return segs + + new_segs = [] + for seg in segs[1]: + new_mask = dilate_mask(seg.cropped_mask, factor) + new_seg = SEG(seg.cropped_image, new_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, seg.control_net_wrapper) + new_segs.append(new_seg) + + return (segs[0], new_segs) + + +class ONNXDetector: + onnx_model = None + + def __init__(self, onnx_model): + self.onnx_model = onnx_model + + def detect(self, image, threshold, dilation, crop_factor, drop_size=1, detailer_hook=None): + drop_size = max(drop_size, 1) + try: + import impact.onnx as onnx + + h = image.shape[1] + w = image.shape[2] + + labels, scores, boxes = onnx.onnx_inference(image, self.onnx_model) + + # collect feasible item + result = [] + + for i in range(len(labels)): + if scores[i] > threshold: + item_bbox = boxes[i] + x1, y1, x2, y2 = item_bbox + + if x2 - x1 > drop_size and y2 - y1 > drop_size: # minimum dimension must be (2,2) to avoid squeeze issue + crop_region = make_crop_region(w, h, item_bbox, crop_factor) + + if detailer_hook is not None: + crop_region = item_bbox.post_crop_region(w, h, item_bbox, crop_region) + + crop_x1, crop_y1, crop_x2, crop_y2, = crop_region + + # prepare cropped mask + cropped_mask = np.zeros((crop_y2 - crop_y1, crop_x2 - crop_x1)) + cropped_mask[y1 - crop_y1:y2 - crop_y1, x1 - crop_x1:x2 - crop_x1] = 1 + cropped_mask = dilate_mask(cropped_mask, dilation) + + # make items. 
just convert the integer label to a string + item = SEG(None, cropped_mask, scores[i], crop_region, item_bbox, str(labels[i]), None) + result.append(item) + + shape = h, w + segs = shape, result + + if detailer_hook is not None and hasattr(detailer_hook, "post_detection"): + segs = detailer_hook.post_detection(segs) + + return segs + except Exception as e: + print(f"ONNXDetector: unable to execute.\n{e}") + pass + + def detect_combined(self, image, threshold, dilation): + return segs_to_combined_mask(self.detect(image, threshold, dilation, 1)) + + def setAux(self, x): + pass + + +def mask_to_segs(mask, combined, crop_factor, bbox_fill, drop_size=1, label='A', crop_min_size=None, detailer_hook=None, is_contour=True): + drop_size = max(drop_size, 1) + if mask is None: + print("[mask_to_segs] Cannot operate: MASK is empty.") + return ([],) + + if isinstance(mask, np.ndarray): + pass # `mask` is already a NumPy array + else: + try: + mask = mask.numpy() + except AttributeError: + print("[mask_to_segs] Cannot operate: MASK is not a NumPy array or Tensor.") + return ([],) + + if mask is None: + print("[mask_to_segs] Cannot operate: MASK is empty.") + return ([],) + + result = [] + + if len(mask.shape) == 2: + mask = np.expand_dims(mask, axis=0) + + for i in range(mask.shape[0]): + mask_i = mask[i] + + if combined: + indices = np.nonzero(mask_i) + if len(indices[0]) > 0 and len(indices[1]) > 0: + bbox = ( + np.min(indices[1]), + np.min(indices[0]), + np.max(indices[1]), + np.max(indices[0]), + ) + crop_region = make_crop_region( + mask_i.shape[1], mask_i.shape[0], bbox, crop_factor + ) + x1, y1, x2, y2 = crop_region + + if detailer_hook is not None: + crop_region = detailer_hook.post_crop_region(mask_i.shape[1], mask_i.shape[0], bbox, crop_region) + + if x2 - x1 > 0 and y2 - y1 > 0: + cropped_mask = mask_i[y1:y2, x1:x2] + + if bbox_fill: + bx1, by1, bx2, by2 = bbox + cropped_mask = cropped_mask.copy() + cropped_mask[by1:by2, bx1:bx2] = 1.0 + + if cropped_mask is not None: + item = SEG(None, cropped_mask, 1.0, crop_region, bbox, label, None) + result.append(item) + + else: + mask_i_uint8 = (mask_i * 255.0).astype(np.uint8) + contours, ctree = cv2.findContours(mask_i_uint8, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) + for j, contour in enumerate(contours): + hierarchy = ctree[0][j] + if hierarchy[3] != -1: + continue + + separated_mask = np.zeros_like(mask_i_uint8) + cv2.drawContours(separated_mask, [contour], 0, 255, -1) + separated_mask = np.array(separated_mask / 255.0).astype(np.float32) + + x, y, w, h = cv2.boundingRect(contour) + bbox = x, y, x + w, y + h + crop_region = make_crop_region( + mask_i.shape[1], mask_i.shape[0], bbox, crop_factor, crop_min_size + ) + + if detailer_hook is not None: + crop_region = detailer_hook.post_crop_region(mask_i.shape[1], mask_i.shape[0], bbox, crop_region) + + if w > drop_size and h > drop_size: + if is_contour: + mask_src = separated_mask + else: + mask_src = mask_i * separated_mask + + cropped_mask = np.array( + mask_src[ + crop_region[1]: crop_region[3], + crop_region[0]: crop_region[2], + ] + ) + + if bbox_fill: + cx1, cy1, _, _ = crop_region + bx1 = x - cx1 + bx2 = x+w - cx1 + by1 = y - cy1 + by2 = y+h - cy1 + cropped_mask[by1:by2, bx1:bx2] = 1.0 + + if cropped_mask is not None: + cropped_mask = utils.to_binary_mask(torch.from_numpy(cropped_mask), 0.1)[0] + item = SEG(None, cropped_mask.numpy(), 1.0, crop_region, bbox, label, None) + result.append(item) + + if not result: + print(f"[mask_to_segs] Empty mask.") + + print(f"# of Detected SEGS: 
{len(result)}") + # for r in result: + # print(f"\tbbox={r.bbox}, crop={r.crop_region}, label={r.label}") + + # shape: (b,h,w) -> (h,w) + return (mask.shape[1], mask.shape[2]), result + + +def mediapipe_facemesh_to_segs(image, crop_factor, bbox_fill, crop_min_size, drop_size, dilation, face, mouth, left_eyebrow, left_eye, left_pupil, right_eyebrow, right_eye, right_pupil): + parts = { + "face": np.array([0x0A, 0xC8, 0x0A]), + "mouth": np.array([0x0A, 0xB4, 0x0A]), + "left_eyebrow": np.array([0xB4, 0xDC, 0x0A]), + "left_eye": np.array([0xB4, 0xC8, 0x0A]), + "left_pupil": np.array([0xFA, 0xC8, 0x0A]), + "right_eyebrow": np.array([0x0A, 0xDC, 0xB4]), + "right_eye": np.array([0x0A, 0xC8, 0xB4]), + "right_pupil": np.array([0x0A, 0xC8, 0xFA]), + } + + def create_segments(image, color): + image = (image * 255).to(torch.uint8) + image = image.squeeze(0).numpy() + mask = cv2.inRange(image, color, color) + + contours, ctree = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) + mask_list = [] + for i, contour in enumerate(contours): + hierarchy = ctree[0][i] + if hierarchy[3] == -1: + convex_hull = cv2.convexHull(contour) + convex_segment = np.zeros_like(image) + cv2.fillPoly(convex_segment, [convex_hull], (255, 255, 255)) + + convex_segment = np.expand_dims(convex_segment, axis=0).astype(np.float32) / 255.0 + tensor = torch.from_numpy(convex_segment) + mask_tensor = torch.any(tensor != 0, dim=-1).float() + mask_tensor = mask_tensor.squeeze(0) + mask_tensor = torch.from_numpy(dilate_mask(mask_tensor.numpy(), dilation)) + mask_list.append(mask_tensor.unsqueeze(0)) + + return mask_list + + segs = [] + + def create_seg(label): + mask_list = create_segments(image, parts[label]) + for mask in mask_list: + seg = mask_to_segs(mask, False, crop_factor, bbox_fill, drop_size=drop_size, label=label, crop_min_size=crop_min_size) + if len(seg[1]) > 0: + segs.extend(seg[1]) + + if face: + create_seg('face') + + if mouth: + create_seg('mouth') + + if left_eyebrow: + create_seg('left_eyebrow') + + if left_eye: + create_seg('left_eye') + + if left_pupil: + create_seg('left_pupil') + + if right_eyebrow: + create_seg('right_eyebrow') + + if right_eye: + create_seg('right_eye') + + if right_pupil: + create_seg('right_pupil') + + return (image.shape[1], image.shape[2]), segs + + +def segs_to_combined_mask(segs): + shape = segs[0] + h = shape[0] + w = shape[1] + + mask = np.zeros((h, w), dtype=np.uint8) + + for seg in segs[1]: + cropped_mask = seg.cropped_mask + crop_region = seg.crop_region + mask[crop_region[1]:crop_region[3], crop_region[0]:crop_region[2]] |= (cropped_mask * 255).astype(np.uint8) + + return torch.from_numpy(mask.astype(np.float32) / 255.0) + + +def segs_to_masklist(segs): + shape = segs[0] + h = shape[0] + w = shape[1] + + masks = [] + for seg in segs[1]: + if isinstance(seg.cropped_mask, np.ndarray): + cropped_mask = torch.from_numpy(seg.cropped_mask) + else: + cropped_mask = seg.cropped_mask + + if cropped_mask.ndim == 2: + cropped_mask = cropped_mask.unsqueeze(0) + + n = len(cropped_mask) + + mask = torch.zeros((n, h, w), dtype=torch.uint8) + crop_region = seg.crop_region + mask[:, crop_region[1]:crop_region[3], crop_region[0]:crop_region[2]] |= (cropped_mask * 255).to(torch.uint8) + mask = (mask / 255.0).to(torch.float32) + + for x in mask: + masks.append(x) + + if len(masks) == 0: + empty_mask = torch.zeros((h, w), dtype=torch.float32, device="cpu") + masks = [empty_mask] + + return masks + + +def vae_decode(vae, samples, use_tile, hook, tile_size=512): + if use_tile: + pixels = 
nodes.VAEDecodeTiled().decode(vae, samples, tile_size)[0] + else: + pixels = nodes.VAEDecode().decode(vae, samples)[0] + + if hook is not None: + pixels = hook.post_decode(pixels) + + return pixels + + +def vae_encode(vae, pixels, use_tile, hook, tile_size=512): + if use_tile: + samples = nodes.VAEEncodeTiled().encode(vae, pixels, tile_size)[0] + else: + samples = nodes.VAEEncode().encode(vae, pixels)[0] + + if hook is not None: + samples = hook.post_encode(samples) + + return samples + + +def latent_upscale_on_pixel_space_shape(samples, scale_method, w, h, vae, use_tile=False, tile_size=512, + save_temp_prefix=None, hook=None): + pixels = vae_decode(vae, samples, use_tile, hook, tile_size=tile_size) + + if save_temp_prefix is not None: + nodes.PreviewImage().save_images(pixels, filename_prefix=save_temp_prefix) + + pixels = nodes.ImageScale().upscale(pixels, scale_method, int(w), int(h), False)[0] + + if hook is not None: + pixels = hook.post_upscale(pixels) + + return vae_encode(vae, pixels, use_tile, hook, tile_size=tile_size) + + +def latent_upscale_on_pixel_space2(samples, scale_method, scale_factor, vae, use_tile=False, tile_size=512, + save_temp_prefix=None, hook=None): + pixels = vae_decode(vae, samples, use_tile, hook, tile_size=tile_size) + + if save_temp_prefix is not None: + nodes.PreviewImage().save_images(pixels, filename_prefix=save_temp_prefix) + + w = pixels.shape[2] * scale_factor + h = pixels.shape[1] * scale_factor + pixels = nodes.ImageScale().upscale(pixels, scale_method, int(w), int(h), False)[0] + + if hook is not None: + pixels = hook.post_upscale(pixels) + + return (vae_encode(vae, pixels, use_tile, hook, tile_size=tile_size), pixels) + + +def latent_upscale_on_pixel_space(samples, scale_method, scale_factor, vae, use_tile=False, tile_size=512, + save_temp_prefix=None, hook=None): + return latent_upscale_on_pixel_space2(samples, scale_method, scale_factor, vae, use_tile, tile_size, save_temp_prefix, hook)[0] + + +def latent_upscale_on_pixel_space_with_model_shape(samples, scale_method, upscale_model, new_w, new_h, vae, + use_tile=False, tile_size=512, save_temp_prefix=None, hook=None): + pixels = vae_decode(vae, samples, use_tile, hook, tile_size=tile_size) + + if save_temp_prefix is not None: + nodes.PreviewImage().save_images(pixels, filename_prefix=save_temp_prefix) + + w = pixels.shape[2] + + # upscale by model upscaler + current_w = w + while current_w < new_w: + pixels = model_upscale.ImageUpscaleWithModel().upscale(upscale_model, pixels)[0] + current_w = pixels.shape[2] + if current_w == w: + print(f"[latent_upscale_on_pixel_space_with_model] x1 upscale model selected") + break + + # downscale to target scale + pixels = nodes.ImageScale().upscale(pixels, scale_method, int(new_w), int(new_h), False)[0] + + if hook is not None: + pixels = hook.post_upscale(pixels) + + return vae_encode(vae, pixels, use_tile, hook, tile_size=tile_size) + + +def latent_upscale_on_pixel_space_with_model2(samples, scale_method, upscale_model, scale_factor, vae, use_tile=False, + tile_size=512, save_temp_prefix=None, hook=None): + pixels = vae_decode(vae, samples, use_tile, hook, tile_size=tile_size) + + if save_temp_prefix is not None: + nodes.PreviewImage().save_images(pixels, filename_prefix=save_temp_prefix) + + w = pixels.shape[2] + h = pixels.shape[1] + + new_w = w * scale_factor + new_h = h * scale_factor + + # upscale by model upscaler + current_w = w + while current_w < new_w: + pixels = model_upscale.ImageUpscaleWithModel().upscale(upscale_model, pixels)[0] + current_w = 
pixels.shape[2] + if current_w == w: + print(f"[latent_upscale_on_pixel_space_with_model] x1 upscale model selected") + break + + # downscale to target scale + pixels = nodes.ImageScale().upscale(pixels, scale_method, int(new_w), int(new_h), False)[0] + + if hook is not None: + pixels = hook.post_upscale(pixels) + + return (vae_encode(vae, pixels, use_tile, hook, tile_size=tile_size), pixels) + +def latent_upscale_on_pixel_space_with_model(samples, scale_method, upscale_model, scale_factor, vae, use_tile=False, + tile_size=512, save_temp_prefix=None, hook=None): + return latent_upscale_on_pixel_space_with_model2(samples, scale_method, upscale_model, scale_factor, vae, use_tile, tile_size, save_temp_prefix, hook)[0] + + +class TwoSamplersForMaskUpscaler: + def __init__(self, scale_method, sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae, + full_sampler_opt=None, upscale_model_opt=None, hook_base_opt=None, hook_mask_opt=None, + hook_full_opt=None, + tile_size=512): + + mask = make_2d_mask(mask) + + mask = mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])) + + self.params = scale_method, sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae + self.upscale_model = upscale_model_opt + self.full_sampler = full_sampler_opt + self.hook_base = hook_base_opt + self.hook_mask = hook_mask_opt + self.hook_full = hook_full_opt + self.use_tiled_vae = use_tiled_vae + self.tile_size = tile_size + self.vae = vae + + def upscale(self, step_info, samples, upscale_factor, save_temp_prefix=None): + scale_method, sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae = self.params + + mask = make_2d_mask(mask) + + self.prepare_hook(step_info) + + # upscale latent + if self.upscale_model is None: + upscaled_latent = latent_upscale_on_pixel_space(samples, scale_method, upscale_factor, vae, + use_tile=self.use_tiled_vae, + save_temp_prefix=save_temp_prefix, + hook=self.hook_base, tile_size=self.tile_size) + else: + upscaled_latent = latent_upscale_on_pixel_space_with_model(samples, scale_method, self.upscale_model, + upscale_factor, vae, + use_tile=self.use_tiled_vae, + save_temp_prefix=save_temp_prefix, + hook=self.hook_mask, tile_size=self.tile_size) + + return self.do_samples(step_info, base_sampler, mask_sampler, sample_schedule, mask, upscaled_latent) + + def prepare_hook(self, step_info): + if self.hook_base is not None: + self.hook_base.set_steps(step_info) + if self.hook_mask is not None: + self.hook_mask.set_steps(step_info) + if self.hook_full is not None: + self.hook_full.set_steps(step_info) + + def upscale_shape(self, step_info, samples, w, h, save_temp_prefix=None): + scale_method, sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae = self.params + + mask = make_2d_mask(mask) + + self.prepare_hook(step_info) + + # upscale latent + if self.upscale_model is None: + upscaled_latent = latent_upscale_on_pixel_space_shape(samples, scale_method, w, h, vae, + use_tile=self.use_tiled_vae, + save_temp_prefix=save_temp_prefix, + hook=self.hook_base, + tile_size=self.tile_size) + else: + upscaled_latent = latent_upscale_on_pixel_space_with_model_shape(samples, scale_method, self.upscale_model, + w, h, vae, + use_tile=self.use_tiled_vae, + save_temp_prefix=save_temp_prefix, + hook=self.hook_mask, + tile_size=self.tile_size) + + return self.do_samples(step_info, base_sampler, mask_sampler, sample_schedule, mask, upscaled_latent) + + def is_full_sample_time(self, step_info, sample_schedule): + cur_step, total_step = step_info + + # make start 
from 1 instead of zero + cur_step += 1 + total_step += 1 + + if sample_schedule == "none": + return False + + elif sample_schedule == "interleave1": + return cur_step % 2 == 0 + + elif sample_schedule == "interleave2": + return cur_step % 3 == 0 + + elif sample_schedule == "interleave3": + return cur_step % 4 == 0 + + elif sample_schedule == "last1": + return cur_step == total_step + + elif sample_schedule == "last2": + return cur_step >= total_step - 1 + + elif sample_schedule == "interleave1+last1": + return cur_step % 2 == 0 or cur_step >= total_step - 1 + + elif sample_schedule == "interleave2+last1": + return cur_step % 3 == 0 or cur_step >= total_step - 1 + + elif sample_schedule == "interleave3+last1": + return cur_step % 4 == 0 or cur_step >= total_step - 1 + + def do_samples(self, step_info, base_sampler, mask_sampler, sample_schedule, mask, upscaled_latent): + mask = make_2d_mask(mask) + + if self.is_full_sample_time(step_info, sample_schedule): + print(f"step_info={step_info} / full time") + + upscaled_latent = base_sampler.sample(upscaled_latent, self.hook_base) + sampler = self.full_sampler if self.full_sampler is not None else base_sampler + return sampler.sample(upscaled_latent, self.hook_full) + + else: + print(f"step_info={step_info} / non-full time") + # upscale mask: (H, W) -> (1, 1, H, W) so F.interpolate sees an NCHW tensor + if mask.ndim == 2: + mask = mask[None, None, :, :] + upscaled_mask = F.interpolate(mask, size=(upscaled_latent['samples'].shape[2], upscaled_latent['samples'].shape[3]), mode='bilinear', align_corners=True) + upscaled_mask = upscaled_mask[:, :, :upscaled_latent['samples'].shape[2], :upscaled_latent['samples'].shape[3]] + + # base sampler + upscaled_inv_mask = torch.where(upscaled_mask != 1.0, torch.tensor(1.0), torch.tensor(0.0)) + upscaled_latent['noise_mask'] = upscaled_inv_mask + upscaled_latent = base_sampler.sample(upscaled_latent, self.hook_base) + + # mask sampler + upscaled_latent['noise_mask'] = upscaled_mask + upscaled_latent = mask_sampler.sample(upscaled_latent, self.hook_mask) + + # remove mask + del upscaled_latent['noise_mask'] + return upscaled_latent + + +class PixelKSampleUpscaler: + def __init__(self, scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, + use_tiled_vae, upscale_model_opt=None, hook_opt=None, tile_size=512): + self.params = scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise + self.upscale_model = upscale_model_opt + self.hook = hook_opt + self.use_tiled_vae = use_tiled_vae + self.tile_size = tile_size + self.is_tiled = False + self.vae = vae + + def upscale(self, step_info, samples, upscale_factor, save_temp_prefix=None): + scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params + + if self.hook is not None: + self.hook.set_steps(step_info) + + if self.upscale_model is None: + upscaled_latent = latent_upscale_on_pixel_space(samples, scale_method, upscale_factor, vae, + use_tile=self.use_tiled_vae, + save_temp_prefix=save_temp_prefix, hook=self.hook) + else: + upscaled_latent = latent_upscale_on_pixel_space_with_model(samples, scale_method, self.upscale_model, + upscale_factor, vae, + use_tile=self.use_tiled_vae, + save_temp_prefix=save_temp_prefix, + hook=self.hook, + tile_size=self.tile_size) + + if self.hook is not None: + model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise = \ + self.hook.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, +
upscaled_latent, denoise) + + refined_latent = nodes.KSampler().sample(model, seed, steps, cfg, sampler_name, scheduler, + positive, negative, upscaled_latent, denoise)[0] + return refined_latent + + def upscale_shape(self, step_info, samples, w, h, save_temp_prefix=None): + scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params + + if self.hook is not None: + self.hook.set_steps(step_info) + + if self.upscale_model is None: + upscaled_latent = latent_upscale_on_pixel_space_shape(samples, scale_method, w, h, vae, + use_tile=self.use_tiled_vae, + save_temp_prefix=save_temp_prefix, hook=self.hook, + tile_size=self.tile_size) + else: + upscaled_latent = latent_upscale_on_pixel_space_with_model_shape(samples, scale_method, self.upscale_model, + w, h, vae, + use_tile=self.use_tiled_vae, + save_temp_prefix=save_temp_prefix, + hook=self.hook, + tile_size=self.tile_size) + + if self.hook is not None: + model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise = \ + self.hook.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, + upscaled_latent, denoise) + + refined_latent = nodes.KSampler().sample(model, seed, steps, cfg, sampler_name, scheduler, + positive, negative, upscaled_latent, denoise)[0] + return refined_latent + + +class ControlNetWrapper: + def __init__(self, control_net, strength, preprocessor, prev_control_net=None, + original_size=None, crop_region=None, control_image=None): + self.control_net = control_net + self.strength = strength + self.preprocessor = preprocessor + self.prev_control_net = prev_control_net + + if original_size is not None and crop_region is not None and control_image is not None: + self.control_image = utils.tensor_resize(control_image, original_size[1], original_size[0]) + self.control_image = torch.tensor(utils.tensor_crop(self.control_image, crop_region)) + else: + self.control_image = None + + def apply(self, positive, negative, image, mask=None, use_acn=False): + cnet_image_list = [] + prev_cnet_images = [] + + if self.prev_control_net is not None: + positive, negative, prev_cnet_images = self.prev_control_net.apply(positive, negative, image, mask, use_acn=use_acn) + + if self.control_image is not None: + cnet_image = self.control_image + elif self.preprocessor is not None: + cnet_image = self.preprocessor.apply(image, mask) + else: + cnet_image = image + + cnet_image_list.extend(prev_cnet_images) + cnet_image_list.append(cnet_image) + + if use_acn: + if "ACN_AdvancedControlNetApply" in nodes.NODE_CLASS_MAPPINGS: + acn = nodes.NODE_CLASS_MAPPINGS['ACN_AdvancedControlNetApply']() + positive, negative, _ = acn.apply_controlnet(positive=positive, negative=negative, control_net=self.control_net, image=cnet_image, + strength=self.strength, start_percent=0.0, end_percent=1.0) + else: + utils.try_install_custom_node('https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet', + "To use 'ControlNetWrapper' for AnimateDiff, 'ComfyUI-Advanced-ControlNet' extension is required.") + raise Exception("'ACN_AdvancedControlNetApply' node isn't installed.") + else: + positive = nodes.ControlNetApply().apply_controlnet(positive, self.control_net, cnet_image, self.strength)[0] + + return positive, negative, cnet_image_list + + +class ControlNetAdvancedWrapper: + def __init__(self, control_net, strength, start_percent, end_percent, preprocessor, prev_control_net=None, + original_size=None, crop_region=None, control_image=None): + self.control_net = control_net +
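+ # ControlNet wrappers chain through prev_control_net; apply() walks the chain first, so ControlNets attached earlier are applied to the conditioning before this one.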
self.strength = strength + self.preprocessor = preprocessor + self.prev_control_net = prev_control_net + self.start_percent = start_percent + self.end_percent = end_percent + + if original_size is not None and crop_region is not None and control_image is not None: + self.control_image = utils.tensor_resize(control_image, original_size[1], original_size[0]) + self.control_image = torch.tensor(utils.tensor_crop(self.control_image, crop_region)) + else: + self.control_image = None + + def apply(self, positive, negative, image, mask=None, use_acn=False): + cnet_image_list = [] + prev_cnet_images = [] + + if self.prev_control_net is not None: + positive, negative, prev_cnet_images = self.prev_control_net.apply(positive, negative, image, mask, use_acn=use_acn) + + if self.control_image is not None: + cnet_image = self.control_image + elif self.preprocessor is not None: + cnet_image = self.preprocessor.apply(image, mask) + else: + cnet_image = image + + cnet_image_list.extend(prev_cnet_images) + cnet_image_list.append(cnet_image) + + if use_acn: + if "ACN_AdvancedControlNetApply" in nodes.NODE_CLASS_MAPPINGS: + acn = nodes.NODE_CLASS_MAPPINGS['ACN_AdvancedControlNetApply']() + positive, negative, _ = acn.apply_controlnet(positive=positive, negative=negative, control_net=self.control_net, image=cnet_image, + strength=self.strength, start_percent=self.start_percent, end_percent=self.end_percent) + else: + utils.try_install_custom_node('https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet', + "To use 'ControlNetAdvancedWrapper' for AnimateDiff, 'ComfyUI-Advanced-ControlNet' extension is required.") + raise Exception("'ACN_AdvancedControlNetApply' node isn't installed.") + else: + positive, negative = nodes.ControlNetApplyAdvanced().apply_controlnet(positive, negative, self.control_net, cnet_image, self.strength, self.start_percent, self.end_percent) + + return positive, negative, cnet_image_list + + +# REQUIREMENTS: BlenderNeko/ComfyUI_TiledKSampler +class TiledKSamplerWrapper: + params = None + + def __init__(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, + tile_width, tile_height, tiling_strategy): + self.params = model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy + + def sample(self, latent_image, hook=None): + if "BNK_TiledKSampler" in nodes.NODE_CLASS_MAPPINGS: + TiledKSampler = nodes.NODE_CLASS_MAPPINGS['BNK_TiledKSampler'] + else: + utils.try_install_custom_node('https://github.com/BlenderNeko/ComfyUI_TiledKSampler', + "To use 'TiledKSamplerProvider', 'Tiled sampling for ComfyUI' extension is required.") + raise Exception("'BNK_TiledKSampler' node isn't installed.") + + model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy = self.params + + if hook is not None: + model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise = \ + hook.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, + denoise) + + return TiledKSampler().sample(model, seed, tile_width, tile_height, tiling_strategy, steps, cfg, sampler_name, + scheduler, positive, negative, latent_image, denoise)[0] + + +class PixelTiledKSampleUpscaler: + def __init__(self, scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, + denoise, + tile_width, tile_height, tiling_strategy, + upscale_model_opt=None, hook_opt=None, tile_size=512): + self.params = scale_method, model, vae,
seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise + self.vae = vae + self.tile_params = tile_width, tile_height, tiling_strategy + self.upscale_model = upscale_model_opt + self.hook = hook_opt + self.tile_size = tile_size + self.is_tiled = True + + def tiled_ksample(self, latent): + if "BNK_TiledKSampler" in nodes.NODE_CLASS_MAPPINGS: + TiledKSampler = nodes.NODE_CLASS_MAPPINGS['BNK_TiledKSampler'] + else: + utils.try_install_custom_node('https://github.com/BlenderNeko/ComfyUI_TiledKSampler', + "To use 'PixelTiledKSampleUpscalerProvider', 'Tiled sampling for ComfyUI' extension is required.") + raise Exception("'BNK_TiledKSampler' node isn't installed.") + + scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params + tile_width, tile_height, tiling_strategy = self.tile_params + + return TiledKSampler().sample(model, seed, tile_width, tile_height, tiling_strategy, steps, cfg, sampler_name, + scheduler, positive, negative, latent, denoise)[0] + + def upscale(self, step_info, samples, upscale_factor, save_temp_prefix=None): + scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params + + if self.hook is not None: + self.hook.set_steps(step_info) + + if self.upscale_model is None: + upscaled_latent = latent_upscale_on_pixel_space(samples, scale_method, upscale_factor, vae, + use_tile=True, save_temp_prefix=save_temp_prefix, + hook=self.hook, + tile_size=self.tile_size) + else: + upscaled_latent = latent_upscale_on_pixel_space_with_model(samples, scale_method, self.upscale_model, + upscale_factor, vae, + use_tile=True, + save_temp_prefix=save_temp_prefix, + hook=self.hook, + tile_size=self.tile_size) + + refined_latent = self.tiled_ksample(upscaled_latent) + + return refined_latent + + def upscale_shape(self, step_info, samples, w, h, save_temp_prefix=None): + scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params + + if self.hook is not None: + self.hook.set_steps(step_info) + + if self.upscale_model is None: + upscaled_latent = latent_upscale_on_pixel_space_shape(samples, scale_method, w, h, vae, + use_tile=True, save_temp_prefix=save_temp_prefix, + hook=self.hook, tile_size=self.tile_size) + else: + upscaled_latent = latent_upscale_on_pixel_space_with_model_shape(samples, scale_method, + self.upscale_model, w, h, vae, + use_tile=True, + save_temp_prefix=save_temp_prefix, + hook=self.hook, + tile_size=self.tile_size) + + refined_latent = self.tiled_ksample(upscaled_latent) + + return refined_latent + + +# REQUIREMENTS: biegert/ComfyUI-CLIPSeg +class BBoxDetectorBasedOnCLIPSeg: + prompt = None + blur = None + threshold = None + dilation_factor = None + aux = None + + def __init__(self, prompt, blur, threshold, dilation_factor): + self.prompt = prompt + self.blur = blur + self.threshold = threshold + self.dilation_factor = dilation_factor + + def detect(self, image, bbox_threshold, bbox_dilation, bbox_crop_factor, drop_size=1, detailer_hook=None): + mask = self.detect_combined(image, bbox_threshold, bbox_dilation) + + mask = make_2d_mask(mask) + + segs = mask_to_segs(mask, False, bbox_crop_factor, True, drop_size, detailer_hook=detailer_hook) + + if detailer_hook is not None and hasattr(detailer_hook, "post_detection"): + segs = detailer_hook.post_detection(segs) + + return segs + + def detect_combined(self, image, bbox_threshold, bbox_dilation): + if "CLIPSeg" in nodes.NODE_CLASS_MAPPINGS: + CLIPSeg = 
nodes.NODE_CLASS_MAPPINGS['CLIPSeg'] + else: + utils.try_install_custom_node('https://github.com/biegert/ComfyUI-CLIPSeg/raw/main/custom_nodes/clipseg.py', + "To use 'CLIPSegDetectorProvider', 'CLIPSeg' extension is required.") + raise Exception("'CLIPSeg' node isn't installed.") + + if self.threshold is None: + threshold = bbox_threshold + else: + threshold = self.threshold + + if self.dilation_factor is None: + dilation_factor = bbox_dilation + else: + dilation_factor = self.dilation_factor + + prompt = self.aux if self.prompt == '' and self.aux is not None else self.prompt + + mask, _, _ = CLIPSeg().segment_image(image, prompt, self.blur, threshold, dilation_factor) + mask = to_binary_mask(mask) + return mask + + def setAux(self, x): + self.aux = x + + +def update_node_status(node, text, progress=None): + if PromptServer.instance.client_id is None: + return + + PromptServer.instance.send_sync("impact/update_status", { + "node": node, + "progress": progress, + "text": text + }, PromptServer.instance.client_id) + + +def random_mask_raw(mask, bbox, factor): + x1, y1, x2, y2 = bbox + w = x2 - x1 + h = y2 - y1 + + factor = int(min(w, h) * factor / 4) + + def draw_random_circle(center, radius): + i, j = center + for x in range(int(i - radius), int(i + radius)): + for y in range(int(j - radius), int(j + radius)): + if np.linalg.norm(np.array([x, y]) - np.array([i, j])) <= radius: + mask[x, y] = 1 + + def draw_irregular_line(start, end, pivot, is_vertical): + i = start + while i < end: + base_radius = np.random.randint(5, factor) + radius = int(base_radius) + + if is_vertical: + draw_random_circle((i, pivot), radius) + else: + draw_random_circle((pivot, i), radius) + + i += radius + + def draw_irregular_line_parallel(start, end, pivot, is_vertical): + with ThreadPoolExecutor(max_workers=16) as executor: + futures = [] + step = (end - start) // 16 + for i in range(start, end, step): + future = executor.submit(draw_irregular_line, i, min(i + step, end), pivot, is_vertical) + futures.append(future) + + for future in futures: + future.result() + + draw_irregular_line_parallel(y1 + factor, y2 - factor, x1 + factor, True) + draw_irregular_line_parallel(y1 + factor, y2 - factor, x2 - factor, True) + draw_irregular_line_parallel(x1 + factor, x2 - factor, y1 + factor, False) + draw_irregular_line_parallel(x1 + factor, x2 - factor, y2 - factor, False) + + mask[y1 + factor:y2 - factor, x1 + factor:x2 - factor] = 1.0 + + +def random_mask(mask, bbox, factor, size=128): + small_mask = np.zeros((size, size)).astype(np.float32) + random_mask_raw(small_mask, (0, 0, size, size), factor) + + x1, y1, x2, y2 = bbox + small_mask = torch.tensor(small_mask).unsqueeze(0).unsqueeze(0) + bbox_mask = torch.nn.functional.interpolate(small_mask, size=(y2 - y1, x2 - x1), mode='bilinear', align_corners=False) + bbox_mask = bbox_mask.squeeze(0).squeeze(0) + mask[y1:y2, x1:x2] = bbox_mask + + +def adaptive_mask_paste(dest_mask, src_mask, bbox): + x1, y1, x2, y2 = bbox + small_mask = torch.tensor(src_mask).unsqueeze(0).unsqueeze(0) + bbox_mask = torch.nn.functional.interpolate(small_mask, size=(y2 - y1, x2 - x1), mode='bilinear', align_corners=False) + bbox_mask = bbox_mask.squeeze(0).squeeze(0) + dest_mask[y1:y2, x1:x2] = bbox_mask + + +class SafeToGPU: + def __init__(self, size): + self.size = size + + def to_device(self, obj, device): + if utils.is_same_device(device, 'cpu'): + obj.to(device) + else: + if utils.is_same_device(obj.device, 'cpu'): # cpu to gpu + model_management.free_memory(self.size * 1.3, device) + if 
model_management.get_free_memory(device) > self.size * 1.3: + try: + obj.to(device) + except: + print(f"WARN: The model is not moved to the '{device}' due to insufficient memory. [1]") + else: + print(f"WARN: The model is not moved to the '{device}' due to insufficient memory. [2]") + + +from comfy.cli_args import args, LatentPreviewMethod +import folder_paths +from latent_preview import TAESD, TAESDPreviewerImpl, Latent2RGBPreviewer + +try: + import comfy.latent_formats as latent_formats + + + def get_previewer(device, latent_format=latent_formats.SD15(), force=False, method=None): + previewer = None + + if method is None: + method = args.preview_method + + if method != LatentPreviewMethod.NoPreviews or force: + # TODO previewer methods + taesd_decoder_path = folder_paths.get_full_path("vae_approx", latent_format.taesd_decoder_name) + + if method == LatentPreviewMethod.Auto: + method = LatentPreviewMethod.Latent2RGB + if taesd_decoder_path: + method = LatentPreviewMethod.TAESD + + if method == LatentPreviewMethod.TAESD: + if taesd_decoder_path: + taesd = TAESD(None, taesd_decoder_path).to(device) + previewer = TAESDPreviewerImpl(taesd) + else: + print("Warning: TAESD previews enabled, but could not find models/vae_approx/{}".format( + latent_format.taesd_decoder_name)) + + if previewer is None: + previewer = Latent2RGBPreviewer(latent_format.latent_rgb_factors) + return previewer + +except: + print(f"#########################################################################") + print(f"[ERROR] ComfyUI-Impact-Pack: Please update ComfyUI to the latest version.") + print(f"#########################################################################") diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/defs.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/defs.py new file mode 100644 index 0000000000000000000000000000000000000000..c898f8c7cb5eb0fe244325f7e5cb4c5600c33a5f --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/defs.py @@ -0,0 +1,17 @@ +detection_labels = [ + 'hand', 'face', 'mouth', 'eyes', 'eyebrows', 'pupils', + 'left_eyebrow', 'left_eye', 'left_pupil', 'right_eyebrow', 'right_eye', 'right_pupil', + 'short_sleeved_shirt', 'long_sleeved_shirt', 'short_sleeved_outwear', 'long_sleeved_outwear', + 'vest', 'sling', 'shorts', 'trousers', 'skirt', 'short_sleeved_dress', 'long_sleeved_dress', 'vest_dress', 'sling_dress', + "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", + "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", + "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", + "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", + "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", + "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", + "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", + "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", + "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", + "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", + "hair drier", "toothbrush" + ] \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/detectors.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/detectors.py new file mode 100644 index 0000000000000000000000000000000000000000..2806f2357838f712f8d5fde70962e53375bb401b 
--- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/detectors.py @@ -0,0 +1,460 @@ +import impact.core as core +from impact.config import MAX_RESOLUTION +import impact.segs_nodes as segs_nodes +import impact.utils as utils +import torch +from impact.core import SEG + + +class SAMDetectorCombined: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "sam_model": ("SAM_MODEL", ), + "segs": ("SEGS", ), + "image": ("IMAGE", ), + "detection_hint": (["center-1", "horizontal-2", "vertical-2", "rect-4", "diamond-4", "mask-area", + "mask-points", "mask-point-bbox", "none"],), + "dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + "threshold": ("FLOAT", {"default": 0.93, "min": 0.0, "max": 1.0, "step": 0.01}), + "bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}), + "mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}), + "mask_hint_use_negative": (["False", "Small", "Outter"], ) + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detector" + + def doit(self, sam_model, segs, image, detection_hint, dilation, + threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative): + return (core.make_sam_mask(sam_model, segs, image, detection_hint, dilation, + threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative), ) + + +class SAMDetectorSegmented: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "sam_model": ("SAM_MODEL", ), + "segs": ("SEGS", ), + "image": ("IMAGE", ), + "detection_hint": (["center-1", "horizontal-2", "vertical-2", "rect-4", "diamond-4", "mask-area", + "mask-points", "mask-point-bbox", "none"],), + "dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + "threshold": ("FLOAT", {"default": 0.93, "min": 0.0, "max": 1.0, "step": 0.01}), + "bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}), + "mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}), + "mask_hint_use_negative": (["False", "Small", "Outter"], ) + } + } + + RETURN_TYPES = ("MASK", "MASK") + RETURN_NAMES = ("combined_mask", "batch_masks") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detector" + + def doit(self, sam_model, segs, image, detection_hint, dilation, + threshold, bbox_expansion, mask_hint_threshold, mask_hint_use_negative): + combined_mask, batch_masks = core.make_sam_mask_segmented(sam_model, segs, image, detection_hint, dilation, + threshold, bbox_expansion, mask_hint_threshold, + mask_hint_use_negative) + return (combined_mask, batch_masks, ) + + +class BboxDetectorForEach: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "bbox_detector": ("BBOX_DETECTOR", ), + "image": ("IMAGE", ), + "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "dilation": ("INT", {"default": 10, "min": -512, "max": 512, "step": 1}), + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + "labels": ("STRING", {"multiline": True, "default": "all", "placeholder": "List the types of segments to be allowed, separated by commas"}), + }, + "optional": {"detailer_hook": ("DETAILER_HOOK",), } + } + + RETURN_TYPES = ("SEGS", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detector" + + def doit(self, bbox_detector, image, threshold, dilation, crop_factor, drop_size, labels=None, detailer_hook=None): + if len(image) > 1: + raise 
Exception('[Impact Pack] ERROR: BboxDetectorForEach does not allow image batches.\nPlease refer to https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + segs = bbox_detector.detect(image, threshold, dilation, crop_factor, drop_size, detailer_hook) + + if labels is not None and labels != '': + labels = labels.split(',') + if len(labels) > 0: + segs, _ = segs_nodes.SEGSLabelFilter.filter(segs, labels) + + return (segs, ) + + +class SegmDetectorForEach: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segm_detector": ("SEGM_DETECTOR", ), + "image": ("IMAGE", ), + "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "dilation": ("INT", {"default": 10, "min": -512, "max": 512, "step": 1}), + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + "labels": ("STRING", {"multiline": True, "default": "all", "placeholder": "List the types of segments to be allowed, separated by commas"}), + }, + "optional": {"detailer_hook": ("DETAILER_HOOK",), } + } + + RETURN_TYPES = ("SEGS", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detector" + + def doit(self, segm_detector, image, threshold, dilation, crop_factor, drop_size, labels=None, detailer_hook=None): + if len(image) > 1: + raise Exception('[Impact Pack] ERROR: SegmDetectorForEach does not allow image batches.\nPlease refer to https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + segs = segm_detector.detect(image, threshold, dilation, crop_factor, drop_size, detailer_hook) + + if labels is not None and labels != '': + labels = labels.split(',') + if len(labels) > 0: + segs, _ = segs_nodes.SEGSLabelFilter.filter(segs, labels) + + return (segs, ) + + +class SegmDetectorCombined: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segm_detector": ("SEGM_DETECTOR", ), + "image": ("IMAGE", ), + "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detector" + + def doit(self, segm_detector, image, threshold, dilation): + mask = segm_detector.detect_combined(image, threshold, dilation) + return (mask,) + + +class BboxDetectorCombined(SegmDetectorCombined): + @classmethod + def INPUT_TYPES(s): + return {"required": { + "bbox_detector": ("BBOX_DETECTOR", ), + "image": ("IMAGE", ), + "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "dilation": ("INT", {"default": 4, "min": -512, "max": 512, "step": 1}), + } + } + + def doit(self, bbox_detector, image, threshold, dilation): + mask = bbox_detector.detect_combined(image, threshold, dilation) + return (mask,) + + +class SimpleDetectorForEach: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "bbox_detector": ("BBOX_DETECTOR", ), + "image": ("IMAGE", ), + + "bbox_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "bbox_dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + + "sub_threshold": ("FLOAT", {"default": 0.5, "min": 
0.0, "max": 1.0, "step": 0.01}), + "sub_dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + "sub_bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}), + + "sam_mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}), + }, + "optional": { + "post_dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + "sam_model_opt": ("SAM_MODEL", ), + "segm_detector_opt": ("SEGM_DETECTOR", ), + } + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detector" + + @staticmethod + def detect(bbox_detector, image, bbox_threshold, bbox_dilation, crop_factor, drop_size, + sub_threshold, sub_dilation, sub_bbox_expansion, + sam_mask_hint_threshold, post_dilation=0, sam_model_opt=None, segm_detector_opt=None, + detailer_hook=None): + if len(image) > 1: + raise Exception('[Impact Pack] ERROR: SimpleDetectorForEach does not allow image batches.\nPlease refer to https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + segs = bbox_detector.detect(image, bbox_threshold, bbox_dilation, crop_factor, drop_size, detailer_hook=detailer_hook) + + if sam_model_opt is not None: + mask = core.make_sam_mask(sam_model_opt, segs, image, "center-1", sub_dilation, + sub_threshold, sub_bbox_expansion, sam_mask_hint_threshold, False) + segs = core.segs_bitwise_and_mask(segs, mask) + elif segm_detector_opt is not None: + segm_segs = segm_detector_opt.detect(image, sub_threshold, sub_dilation, crop_factor, drop_size, detailer_hook=detailer_hook) + mask = core.segs_to_combined_mask(segm_segs) + segs = core.segs_bitwise_and_mask(segs, mask) + + segs = core.dilate_segs(segs, post_dilation) + + return (segs,) + + def doit(self, bbox_detector, image, bbox_threshold, bbox_dilation, crop_factor, drop_size, + sub_threshold, sub_dilation, sub_bbox_expansion, + sam_mask_hint_threshold, post_dilation=0, sam_model_opt=None, segm_detector_opt=None): + + return SimpleDetectorForEach.detect(bbox_detector, image, bbox_threshold, bbox_dilation, crop_factor, drop_size, + sub_threshold, sub_dilation, sub_bbox_expansion, + sam_mask_hint_threshold, post_dilation=post_dilation, + sam_model_opt=sam_model_opt, segm_detector_opt=segm_detector_opt) + + +class SimpleDetectorForEachPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "detailer_pipe": ("DETAILER_PIPE", ), + "image": ("IMAGE", ), + + "bbox_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "bbox_dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + + "sub_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "sub_dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + "sub_bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}), + + "sam_mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}), + }, + "optional": { + "post_dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + } + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detector" + + def doit(self, detailer_pipe, image, bbox_threshold, bbox_dilation, crop_factor, drop_size, + sub_threshold, sub_dilation, sub_bbox_expansion, sam_mask_hint_threshold, 
post_dilation=0): + + if len(image) > 1: + raise Exception('[Impact Pack] ERROR: SimpleDetectorForEach does not allow image batches.\nPlease refer to https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + model, clip, vae, positive, negative, wildcard, bbox_detector, segm_detector_opt, sam_model_opt, detailer_hook, refiner_model, refiner_clip, refiner_positive, refiner_negative = detailer_pipe + + return SimpleDetectorForEach.detect(bbox_detector, image, bbox_threshold, bbox_dilation, crop_factor, drop_size, + sub_threshold, sub_dilation, sub_bbox_expansion, + sam_mask_hint_threshold, post_dilation=post_dilation, sam_model_opt=sam_model_opt, segm_detector_opt=segm_detector_opt, + detailer_hook=detailer_hook) + + +class SimpleDetectorForAnimateDiff: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "bbox_detector": ("BBOX_DETECTOR", ), + "image_frames": ("IMAGE", ), + + "bbox_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "bbox_dilation": ("INT", {"default": 0, "min": -255, "max": 255, "step": 1}), + + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + + "sub_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "sub_dilation": ("INT", {"default": 0, "min": -255, "max": 255, "step": 1}), + "sub_bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}), + + "sam_mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}), + }, + "optional": { + "masking_mode": (["Pivot SEGS", "Combine neighboring frames", "Don't combine"],), + "segs_pivot": (["Combined mask", "1st frame mask"],), + "sam_model_opt": ("SAM_MODEL", ), + "segm_detector_opt": ("SEGM_DETECTOR", ), + } + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detector" + + @staticmethod + def detect(bbox_detector, image_frames, bbox_threshold, bbox_dilation, crop_factor, drop_size, + sub_threshold, sub_dilation, sub_bbox_expansion, sam_mask_hint_threshold, + masking_mode="Pivot SEGS", segs_pivot="Combined mask", sam_model_opt=None, segm_detector_opt=None): + + h = image_frames.shape[1] + w = image_frames.shape[2] + + # gather segs for all frames + segs_by_frames = [] + for image in image_frames: + image = image.unsqueeze(0) + segs = bbox_detector.detect(image, bbox_threshold, bbox_dilation, crop_factor, drop_size) + + if sam_model_opt is not None: + mask = core.make_sam_mask(sam_model_opt, segs, image, "center-1", sub_dilation, + sub_threshold, sub_bbox_expansion, sam_mask_hint_threshold, False) + segs = core.segs_bitwise_and_mask(segs, mask) + elif segm_detector_opt is not None: + segm_segs = segm_detector_opt.detect(image, sub_threshold, sub_dilation, crop_factor, drop_size) + mask = core.segs_to_combined_mask(segm_segs) + segs = core.segs_bitwise_and_mask(segs, mask) + + segs_by_frames.append(segs) + + def get_masked_frames(): + masks_by_frame = [] + for i, segs in enumerate(segs_by_frames): + masks_in_frame = segs_nodes.SEGSToMaskList().doit(segs)[0] + current_frame_mask = (masks_in_frame[0] * 255).to(torch.uint8) + + for mask in masks_in_frame[1:]: + current_frame_mask |= (mask * 255).to(torch.uint8) + + current_frame_mask = (current_frame_mask/255.0).to(torch.float32) + current_frame_mask = utils.to_binary_mask(current_frame_mask, 0.1)[0] + + masks_by_frame.append(current_frame_mask) + 
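+ # masks_by_frame holds one binary mask per frame: the union of every detection in that frame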
+ return masks_by_frame + + def get_empty_mask(): + return torch.zeros((h, w), dtype=torch.float32, device="cpu") + + def get_neighboring_mask_at(i, masks_by_frame): + # previous/next frame masks; fall back to an empty mask at the sequence boundaries + prv = masks_by_frame[i-1] if i > 0 else get_empty_mask() + cur = masks_by_frame[i] + nxt = masks_by_frame[i+1] if i < len(masks_by_frame) - 1 else get_empty_mask() + + prv = prv if prv is not None else get_empty_mask() + cur = cur.clone() if cur is not None else get_empty_mask() + nxt = nxt if nxt is not None else get_empty_mask() + + return prv, cur, nxt + + def get_merged_neighboring_mask(masks_by_frame): + if len(masks_by_frame) <= 1: + return masks_by_frame + + result = [] + for i in range(0, len(masks_by_frame)): + prv, cur, nxt = get_neighboring_mask_at(i, masks_by_frame) + cur = (cur * 255).to(torch.uint8) + cur |= (prv * 255).to(torch.uint8) + cur |= (nxt * 255).to(torch.uint8) + cur = (cur / 255.0).to(torch.float32) + cur = utils.to_binary_mask(cur, 0.1)[0] + result.append(cur) + + return result + + def get_whole_merged_mask(): + all_masks = [] + for segs in segs_by_frames: + all_masks += segs_nodes.SEGSToMaskList().doit(segs)[0] + + merged_mask = (all_masks[0] * 255).to(torch.uint8) + for mask in all_masks[1:]: + merged_mask |= (mask * 255).to(torch.uint8) + + merged_mask = (merged_mask / 255.0).to(torch.float32) + merged_mask = utils.to_binary_mask(merged_mask, 0.1)[0] + return merged_mask + + def get_pivot_segs(): + if segs_pivot == "1st frame mask": + return segs_by_frames[0] + else: + merged_mask = get_whole_merged_mask() + return segs_nodes.MaskToSEGS().doit(merged_mask, False, crop_factor, False, drop_size, contour_fill=True)[0] + + def get_merged_neighboring_segs(): + pivot_segs = get_pivot_segs() + + masks_by_frame = get_masked_frames() + masks_by_frame = get_merged_neighboring_mask(masks_by_frame) + + new_segs = [] + for seg in pivot_segs[1]: + cropped_mask = torch.zeros(seg.cropped_mask.shape, dtype=torch.float32, device="cpu").unsqueeze(0) + pivot_mask = torch.from_numpy(seg.cropped_mask) + x1, y1, x2, y2 = seg.crop_region + for mask in masks_by_frame: + cropped_mask_at_frame = (mask[y1:y2, x1:x2] * pivot_mask).unsqueeze(0) + cropped_mask = torch.cat((cropped_mask, cropped_mask_at_frame), dim=0) + + if len(cropped_mask) > 1: + cropped_mask = cropped_mask[1:] + + new_seg = SEG(seg.cropped_image, cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, seg.control_net_wrapper) + new_segs.append(new_seg) + + return pivot_segs[0], new_segs + + def get_separated_segs(): + pivot_segs = get_pivot_segs() + + masks_by_frame = get_masked_frames() + + new_segs = [] + for seg in pivot_segs[1]: + cropped_mask = torch.zeros(seg.cropped_mask.shape, dtype=torch.float32, device="cpu").unsqueeze(0) + x1, y1, x2, y2 = seg.crop_region + for mask in masks_by_frame: + cropped_mask_at_frame = mask[y1:y2, x1:x2].unsqueeze(0) + cropped_mask = torch.cat((cropped_mask, cropped_mask_at_frame), dim=0) + + if len(cropped_mask) > 1: + cropped_mask = cropped_mask[1:] + + new_seg = SEG(seg.cropped_image, cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, seg.control_net_wrapper) + new_segs.append(new_seg) + + return pivot_segs[0], new_segs + + # create result mask + if masking_mode == "Pivot SEGS": + return (get_pivot_segs(), ) + + elif masking_mode == "Combine neighboring frames": + return (get_merged_neighboring_segs(), ) + + else: # elif masking_mode == "Don't combine": + return (get_separated_segs(), ) + + def doit(self, bbox_detector, image_frames, bbox_threshold, bbox_dilation, crop_factor, drop_size, + sub_threshold, sub_dilation, sub_bbox_expansion, sam_mask_hint_threshold, + masking_mode="Pivot 
SEGS", segs_pivot="Combined mask", sam_model_opt=None, segm_detector_opt=None): + + return SimpleDetectorForAnimateDiff.detect(bbox_detector, image_frames, bbox_threshold, bbox_dilation, crop_factor, drop_size, + sub_threshold, sub_dilation, sub_bbox_expansion, sam_mask_hint_threshold, + masking_mode, segs_pivot, sam_model_opt, segm_detector_opt) diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/hf_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/hf_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..227ef3c867eace749cce890c1ac079edd971623b --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/hf_nodes.py @@ -0,0 +1,183 @@ +import comfy +import re +from impact.utils import * + +hf_transformer_model_urls = [ + "rizvandwiki/gender-classification-2", + "NTQAI/pedestrian_gender_recognition", + "Leilab/gender_class", + "ProjectPersonal/GenderClassifier", + "crangana/trained-gender", + "cledoux42/GenderNew_v002", + "ivensamdh/genderage2" +] + + +class HF_TransformersClassifierProvider: + @classmethod + def INPUT_TYPES(s): + global hf_transformer_model_urls + return {"required": { + "preset_repo_id": (hf_transformer_model_urls + ['Manual repo id'],), + "manual_repo_id": ("STRING", {"multiline": False}), + "device_mode": (["AUTO", "Prefer GPU", "CPU"],), + }, + } + + RETURN_TYPES = ("TRANSFORMERS_CLASSIFIER",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/HuggingFace" + + def doit(self, preset_repo_id, manual_repo_id, device_mode): + from transformers import pipeline + + if preset_repo_id == 'Manual repo id': + url = manual_repo_id + else: + url = preset_repo_id + + if device_mode != 'CPU': + device = comfy.model_management.get_torch_device() + else: + device = "cpu" + + classifier = pipeline(model=url, device=device) + + return (classifier,) + + +preset_classify_expr = [ + '#Female > #Male', + '#Female < #Male', + 'female > 0.5', + 'male > 0.5', + 'Age16to25 > 0.1', + 'Age50to69 > 0.1', +] + +symbolic_label_map = { + '#Female': {'female', 'Female', 'Human Female', 'woman', 'women', 'girl'}, + '#Male': {'male', 'Male', 'Human Male', 'man', 'men', 'boy'} +} + +def is_numeric_string(input_str): + return re.match(r'^-?\d+(\.\d+)?$', input_str) is not None + + +classify_expr_pattern = r'([^><= ]+)\s*(>|<|>=|<=|=)\s*([^><= ]+)' + + +class SEGS_Classify: + @classmethod + def INPUT_TYPES(s): + global preset_classify_expr + return {"required": { + "classifier": ("TRANSFORMERS_CLASSIFIER",), + "segs": ("SEGS",), + "preset_expr": (preset_classify_expr + ['Manual expr'],), + "manual_expr": ("STRING", {"multiline": False}), + }, + "optional": { + "ref_image_opt": ("IMAGE", ), + } + } + + RETURN_TYPES = ("SEGS", "SEGS",) + RETURN_NAMES = ("filtered_SEGS", "remained_SEGS",) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/HuggingFace" + + @staticmethod + def lookup_classified_label_score(score_infos, label): + global symbolic_label_map + + if label.startswith('#'): + if label not in symbolic_label_map: + return None + else: + label = symbolic_label_map[label] + else: + label = {label} + + for x in score_infos: + if x['label'] in label: + return x['score'] + + return None + + def doit(self, classifier, segs, preset_expr, manual_expr, ref_image_opt=None): + if preset_expr == 'Manual expr': + expr_str = manual_expr + else: + expr_str = preset_expr + + match = re.match(classify_expr_pattern, expr_str) + + if match is None: + return ((segs[0], []), segs) + + a = match.group(1) + op = match.group(2) + b = match.group(3) + + a_is_lab = not 
is_numeric_string(a) + b_is_lab = not is_numeric_string(b) + + classified = [] + remained_SEGS = [] + + for seg in segs[1]: + cropped_image = None + + if seg.cropped_image is not None: + cropped_image = seg.cropped_image + elif ref_image_opt is not None: + # take from original image + cropped_image = crop_image(ref_image_opt, seg.crop_region) + + if cropped_image is not None: + cropped_image = to_pil(cropped_image) + res = classifier(cropped_image) + classified.append((seg, res)) + else: + remained_SEGS.append(seg) + + filtered_SEGS = [] + for seg, res in classified: + if a_is_lab: + avalue = SEGS_Classify.lookup_classified_label_score(res, a) + else: + avalue = a + + if b_is_lab: + bvalue = SEGS_Classify.lookup_classified_label_score(res, b) + else: + bvalue = b + + if avalue is None or bvalue is None: + remained_SEGS.append(seg) + continue + + avalue = float(avalue) + bvalue = float(bvalue) + + if op == '>': + cond = avalue > bvalue + elif op == '<': + cond = avalue < bvalue + elif op == '>=': + cond = avalue >= bvalue + elif op == '<=': + cond = avalue <= bvalue + else: + cond = avalue == bvalue + + if cond: + filtered_SEGS.append(seg) + else: + remained_SEGS.append(seg) + + return ((segs[0], filtered_SEGS), (segs[0], remained_SEGS)) diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/hook_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/hook_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..c218da4bfa5132359e1f95277ed6db7107097f33 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/hook_nodes.py @@ -0,0 +1,82 @@ +import sys +from . import hooks +from . import defs + + +class SEGSOrderedFilterDetailerHookProvider: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "target": (["area(=w*h)", "width", "height", "x1", "y1", "x2", "y2"],), + "order": ("BOOLEAN", {"default": True, "label_on": "descending", "label_off": "ascending"}), + "take_start": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + "take_count": ("INT", {"default": 1, "min": 0, "max": sys.maxsize, "step": 1}), + }, + } + + RETURN_TYPES = ("DETAILER_HOOK", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, target, order, take_start, take_count): + hook = hooks.SEGSOrderedFilterDetailerHook(target, order, take_start, take_count) + return (hook, ) + + +class SEGSRangeFilterDetailerHookProvider: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "target": (["area(=w*h)", "width", "height", "x1", "y1", "x2", "y2", "length_percent"],), + "mode": ("BOOLEAN", {"default": True, "label_on": "inside", "label_off": "outside"}), + "min_value": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + "max_value": ("INT", {"default": 67108864, "min": 0, "max": sys.maxsize, "step": 1}), + }, + } + + RETURN_TYPES = ("DETAILER_HOOK", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, target, mode, min_value, max_value): + hook = hooks.SEGSRangeFilterDetailerHook(target, mode, min_value, max_value) + return (hook, ) + + +class SEGSLabelFilterDetailerHookProvider: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "preset": (['all'] + defs.detection_labels,), + "labels": ("STRING", {"multiline": True, "placeholder": "List the types of segments to be allowed, separated by commas"}), + }, + } + + RETURN_TYPES = ("DETAILER_HOOK", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, preset, labels): + hook = 
hooks.SEGSLabelFilterDetailerHook(labels) + return (hook, ) + + +class PreviewDetailerHookProvider: + @classmethod + def INPUT_TYPES(s): + return { + "required": {"quality": ("INT", {"default": 95, "min": 20, "max": 100})}, + "hidden": {"unique_id": "UNIQUE_ID"}, + } + + RETURN_TYPES = ("DETAILER_HOOK", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, quality, unique_id): + hook = hooks.PreviewDetailerHook(unique_id, quality) + return (hook, ) diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/hooks.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/hooks.py new file mode 100644 index 0000000000000000000000000000000000000000..858c503769f2fd994dcd9120c575e02eb8dee872 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/hooks.py @@ -0,0 +1,477 @@ +import copy +import nodes + +from impact import utils +from . import segs_nodes +from thirdparty import noise_nodes +from server import PromptServer +import asyncio +import folder_paths +import os + +class PixelKSampleHook: + cur_step = 0 + total_step = 0 + + def __init__(self): + pass + + def set_steps(self, info): + self.cur_step, self.total_step = info + + def post_decode(self, pixels): + return pixels + + def post_upscale(self, pixels): + return pixels + + def post_encode(self, samples): + return samples + + def pre_decode(self, samples): + return samples + + def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, + denoise): + return model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise + + def post_crop_region(self, w, h, item_bbox, crop_region): + return crop_region + + def touch_scaled_size(self, w, h): + return w, h + + +class PixelKSampleHookCombine(PixelKSampleHook): + hook1 = None + hook2 = None + + def __init__(self, hook1, hook2): + super().__init__() + self.hook1 = hook1 + self.hook2 = hook2 + + def set_steps(self, info): + self.hook1.set_steps(info) + self.hook2.set_steps(info) + + def pre_decode(self, samples): + return self.hook2.pre_decode(self.hook1.pre_decode(samples)) + + def post_decode(self, pixels): + return self.hook2.post_decode(self.hook1.post_decode(pixels)) + + def post_upscale(self, pixels): + return self.hook2.post_upscale(self.hook1.post_upscale(pixels)) + + def post_encode(self, samples): + return self.hook2.post_encode(self.hook1.post_encode(samples)) + + def post_crop_region(self, w, h, item_bbox, crop_region): + crop_region = self.hook1.post_crop_region(w, h, item_bbox, crop_region) + return self.hook2.post_crop_region(w, h, item_bbox, crop_region) + + def touch_scaled_size(self, w, h): + w, h = self.hook1.touch_scaled_size(w, h) + return self.hook2.touch_scaled_size(w, h) + + def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, + denoise): + model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise = \ + self.hook1.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, + upscaled_latent, denoise) + + return self.hook2.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, + upscaled_latent, denoise) + + +class DetailerHookCombine(PixelKSampleHookCombine): + def cycle_latent(self, latent): + latent = self.hook1.cycle_latent(latent) + latent = self.hook2.cycle_latent(latent) + return latent + + def post_detection(self, segs): + segs = self.hook1.post_detection(segs) + segs = self.hook2.post_detection(segs) + return segs + + 
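+ # post_paste runs on the full image after each refined crop has been pasted back; like the other callbacks it chains hook1 first, then hook2.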
def post_paste(self, image): + image = self.hook1.post_paste(image) + image = self.hook2.post_paste(image) + return image + + +class SimpleCfgScheduleHook(PixelKSampleHook): + target_cfg = 0 + + def __init__(self, target_cfg): + super().__init__() + self.target_cfg = target_cfg + + def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise): + if self.total_step > 1: + progress = self.cur_step / (self.total_step - 1) + gap = self.target_cfg - cfg + current_cfg = int(cfg + gap * progress) + else: + current_cfg = self.target_cfg + + return model, seed, steps, current_cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise + + +class SimpleDenoiseScheduleHook(PixelKSampleHook): + def __init__(self, target_denoise): + super().__init__() + self.target_denoise = target_denoise + + def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise): + if self.total_step > 1: + progress = self.cur_step / (self.total_step - 1) + gap = self.target_denoise - denoise + current_denoise = denoise + gap * progress + else: + current_denoise = self.target_denoise + + return model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, current_denoise + + +class SimpleStepsScheduleHook(PixelKSampleHook): + def __init__(self, target_steps): + super().__init__() + self.target_steps = target_steps + + def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise): + if self.total_step > 1: + progress = self.cur_step / (self.total_step - 1) + gap = self.target_steps - steps + current_steps = int(steps + gap * progress) + else: + current_steps = self.target_steps + + return model, seed, current_steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise + + +class DetailerHook(PixelKSampleHook): + def cycle_latent(self, latent): + return latent + + def post_detection(self, segs): + return segs + + def post_paste(self, image): + return image + + +class SimpleDetailerDenoiseSchedulerHook(DetailerHook): + def __init__(self, target_denoise): + super().__init__() + self.target_denoise = target_denoise + + def pre_ksample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent, denoise): + if self.total_step > 1: + progress = self.cur_step / (self.total_step - 1) + gap = self.target_denoise - denoise + current_denoise = denoise + gap * progress + else: + # ignore hook if total cycle <= 1 + current_denoise = denoise + + return model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent, current_denoise + + +class CoreMLHook(DetailerHook): + def __init__(self, mode): + super().__init__() + resolution = mode.split('x') + + self.w = int(resolution[0]) + self.h = int(resolution[1]) + + self.override_bbox_by_segm = False + + def pre_decode(self, samples): + new_samples = copy.deepcopy(samples) + new_samples['samples'] = samples['samples'][0].unsqueeze(0) + return new_samples + + def post_encode(self, samples): + new_samples = copy.deepcopy(samples) + new_samples['samples'] = samples['samples'].repeat(2, 1, 1, 1) + return new_samples + + def post_crop_region(self, w, h, item_bbox, crop_region): + x1, y1, x2, y2 = crop_region + bx1, by1, bx2, by2 = item_bbox + crop_w = x2-x1 + crop_h = y2-y1 + + crop_ratio = crop_w/crop_h + target_ratio = self.w/self.h + if crop_ratio < target_ratio: + # shrink height + top_gap = by1 - y1 + bottom_gap = y2 - by2 + + 
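+ # distribute the height reduction between the top and bottom margins in proportion to the free space around the bbox, so the detected item stays inside the shrunken crop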
gap_ratio = top_gap / max(top_gap + bottom_gap, 1) + + target_height = 1/target_ratio*crop_w + delta_height = crop_h - target_height + + new_y1 = int(y1 + delta_height*gap_ratio) + new_y2 = int(new_y1 + target_height) + crop_region = x1, new_y1, x2, new_y2 + + elif crop_ratio > target_ratio: + # shrink width + left_gap = bx1 - x1 + right_gap = x2 - bx2 + + gap_ratio = left_gap / max(left_gap + right_gap, 1) + + target_width = target_ratio*crop_h + delta_width = crop_w - target_width + + new_x1 = int(x1 + delta_width*gap_ratio) + new_x2 = int(new_x1 + target_width) + crop_region = new_x1, y1, new_x2, y2 + + return crop_region + + def touch_scaled_size(self, w, h): + return self.w, self.h + + +# REQUIREMENTS: BlenderNeko/ComfyUI Noise +class InjectNoiseHook(PixelKSampleHook): + def __init__(self, source, seed, start_strength, end_strength): + super().__init__() + self.source = source + self.seed = seed + self.start_strength = start_strength + self.end_strength = end_strength + + def post_encode(self, samples): + cur_step = self.cur_step + + size = samples['samples'].shape + seed = cur_step + self.seed + cur_step + + if "BNK_NoisyLatentImage" in nodes.NODE_CLASS_MAPPINGS and "BNK_InjectNoise" in nodes.NODE_CLASS_MAPPINGS: + NoisyLatentImage = nodes.NODE_CLASS_MAPPINGS["BNK_NoisyLatentImage"] + InjectNoise = nodes.NODE_CLASS_MAPPINGS["BNK_InjectNoise"] + else: + utils.try_install_custom_node('https://github.com/BlenderNeko/ComfyUI_Noise', + "To use 'NoiseInjectionHookProvider', 'ComfyUI Noise' extension is required.") + raise Exception("'BNK_NoisyLatentImage', 'BNK_InjectNoise' nodes are not installed.") + + noise = NoisyLatentImage().create_noisy_latents(self.source, seed, size[3] * 8, size[2] * 8, size[0])[0] + + # inj noise + mask = None + if 'noise_mask' in samples: + mask = samples['noise_mask'] + + strength = self.start_strength + (self.end_strength - self.start_strength) * cur_step / self.total_step + samples = InjectNoise().inject_noise(samples, strength, noise, mask)[0] + print(f"[Impact Pack] InjectNoiseHook: strength = {strength}") + + if mask is not None: + samples['noise_mask'] = mask + + return samples + + +class UnsamplerHook(PixelKSampleHook): + def __init__(self, model, steps, start_end_at_step, end_end_at_step, cfg, sampler_name, + scheduler, normalize, positive, negative): + super().__init__() + self.model = model + self.cfg = cfg + self.sampler_name = sampler_name + self.steps = steps + self.start_end_at_step = start_end_at_step + self.end_end_at_step = end_end_at_step + self.scheduler = scheduler + self.normalize = normalize + self.positive = positive + self.negative = negative + + def post_encode(self, samples): + cur_step = self.cur_step + + Unsampler = noise_nodes.Unsampler + + end_at_step = self.start_end_at_step + (self.end_end_at_step - self.start_end_at_step) * cur_step / self.total_step + end_at_step = int(end_at_step) + + print(f"[Impact Pack] UnsamplerHook: end_at_step = {end_at_step}") + + # inj noise + mask = None + if 'noise_mask' in samples: + mask = samples['noise_mask'] + + samples = Unsampler().unsampler(self.model, self.cfg, self.sampler_name, self.steps, end_at_step, + self.scheduler, self.normalize, self.positive, self.negative, samples)[0] + + if mask is not None: + samples['noise_mask'] = mask + + return samples + + +class InjectNoiseHookForDetailer(DetailerHook): + def __init__(self, source, seed, start_strength, end_strength, from_start=False): + super().__init__() + self.source = source + self.seed = seed + self.start_strength = start_strength + self.end_strength = end_strength +
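+ # from_start=False leaves the first detailer cycle untouched; noise is injected from the second cycle on (see cycle_latent below)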
self.from_start = from_start + + def inject_noise(self, samples): + cur_step = self.cur_step if self.from_start else self.cur_step - 1 + total_step = self.total_step if self.from_start else self.total_step - 1 + + size = samples['samples'].shape + seed = cur_step + self.seed + cur_step + + if "BNK_NoisyLatentImage" in nodes.NODE_CLASS_MAPPINGS and "BNK_InjectNoise" in nodes.NODE_CLASS_MAPPINGS: + NoisyLatentImage = nodes.NODE_CLASS_MAPPINGS["BNK_NoisyLatentImage"] + InjectNoise = nodes.NODE_CLASS_MAPPINGS["BNK_InjectNoise"] + else: + utils.try_install_custom_node('https://github.com/BlenderNeko/ComfyUI_Noise', + "To use 'NoiseInjectionDetailerHookProvider', 'ComfyUI Noise' extension is required.") + raise Exception("'BNK_NoisyLatentImage', 'BNK_InjectNoise' nodes are not installed.") + + noise = NoisyLatentImage().create_noisy_latents(self.source, seed, size[3] * 8, size[2] * 8, size[0])[0] + + # inj noise + mask = None + if 'noise_mask' in samples: + mask = samples['noise_mask'] + + strength = self.start_strength + (self.end_strength - self.start_strength) * cur_step / total_step + samples = InjectNoise().inject_noise(samples, strength, noise, mask)[0] + + if mask is not None: + samples['noise_mask'] = mask + + return samples + + def cycle_latent(self, latent): + if self.cur_step == 0 and not self.from_start: + return latent + else: + return self.inject_noise(latent) + + +class UnsamplerDetailerHook(DetailerHook): + def __init__(self, model, steps, start_end_at_step, end_end_at_step, cfg, sampler_name, + scheduler, normalize, positive, negative, from_start=False): + super().__init__() + self.model = model + self.cfg = cfg + self.sampler_name = sampler_name + self.steps = steps + self.start_end_at_step = start_end_at_step + self.end_end_at_step = end_end_at_step + self.scheduler = scheduler + self.normalize = normalize + self.positive = positive + self.negative = negative + self.from_start = from_start + + def unsample(self, samples): + cur_step = self.cur_step if self.from_start else self.cur_step - 1 + total_step = self.total_step if self.from_start else self.total_step - 1 + + Unsampler = noise_nodes.Unsampler + + end_at_step = self.start_end_at_step + (self.end_end_at_step - self.start_end_at_step) * cur_step / total_step + end_at_step = int(end_at_step) + + # inj noise + mask = None + if 'noise_mask' in samples: + mask = samples['noise_mask'] + + samples = Unsampler().unsampler(self.model, self.cfg, self.sampler_name, self.steps, end_at_step, + self.scheduler, self.normalize, self.positive, self.negative, samples)[0] + + if mask is not None: + samples['noise_mask'] = mask + + return samples + + def cycle_latent(self, latent): + if self.cur_step == 0 and not self.from_start: + return latent + else: + return self.unsample(latent) + + +class SEGSOrderedFilterDetailerHook(DetailerHook): + def __init__(self, target, order, take_start, take_count): + super().__init__() + self.target = target + self.order = order + self.take_start = take_start + self.take_count = take_count + + def post_detection(self, segs): + return segs_nodes.SEGSOrderedFilter().doit(segs, self.target, self.order, self.take_start, self.take_count)[0] + + +class SEGSRangeFilterDetailerHook(DetailerHook): + def __init__(self, target, mode, min_value, max_value): + super().__init__() + self.target = target + self.mode = mode + self.min_value = min_value + self.max_value = max_value + + def post_detection(self, segs): + return segs_nodes.SEGSRangeFilter().doit(segs, self.target, self.mode, self.min_value, self.max_value)[0] + 
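+ # A sketch of how these detailer hooks compose (values are hypothetical): DetailerHookCombine chains two hooks, e.g. + # hook = DetailerHookCombine(SEGSOrderedFilterDetailerHook('area(=w*h)', True, 0, 3), + # SEGSRangeFilterDetailerHook('width', True, 64, 1024)) + # segs = hook.post_detection(segs) # keep the three largest segments, then drop any whose width falls outside [64, 1024]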
+ +class SEGSLabelFilterDetailerHook(DetailerHook): + def __init__(self, labels): + super().__init__() + self.labels = labels + + def post_detection(self, segs): + return segs_nodes.SEGSLabelFilter().doit(segs, "", self.labels)[0] + + +class PreviewDetailerHook(DetailerHook): + def __init__(self, node_id, quality): + super().__init__() + self.node_id = node_id + self.quality = quality + + async def send(self, image): + if len(image) > 0: + image = image[0].unsqueeze(0) + img = utils.tensor2pil(image) + + temp_path = os.path.join(folder_paths.get_temp_directory(), 'pvhook') + + if not os.path.exists(temp_path): + os.makedirs(temp_path) + + fullpath = os.path.join(temp_path, f"{self.node_id}.webp") + img.save(fullpath, quality=self.quality) + + item = { + "filename": f"{self.node_id}.webp", + "subfolder": 'pvhook', + "type": 'temp' + } + + PromptServer.instance.send_sync("impact-preview", {'node_id': self.node_id, 'item': item}) + + def post_paste(self, image): + asyncio.run(self.send(image)) + return image diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_pack.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_pack.py new file mode 100644 index 0000000000000000000000000000000000000000..1725c65620353efbcf58a14429cb842ac4865891 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_pack.py @@ -0,0 +1,2106 @@ +import os +import sys + +import comfy.samplers +import comfy.sd +import warnings +from segment_anything import sam_model_registry +from io import BytesIO +import piexif +import zipfile +import re + +import impact.wildcards + +from impact.utils import * +import impact.core as core +from impact.core import SEG +from impact.config import MAX_RESOLUTION, latent_letter_path +from PIL import Image, ImageOps +import numpy as np +import hashlib +import json +import safetensors.torch +from PIL.PngImagePlugin import PngInfo +import comfy.model_management +import base64 +import impact.wildcards as wildcards +from . 
import hooks
+
+warnings.filterwarnings('ignore', category=UserWarning, message='TypedStorage is deprecated')
+
+model_path = folder_paths.models_dir
+
+
+# folder_paths.supported_pt_extensions
+add_folder_path_and_extensions("mmdets_bbox", [os.path.join(model_path, "mmdets", "bbox")], folder_paths.supported_pt_extensions)
+add_folder_path_and_extensions("mmdets_segm", [os.path.join(model_path, "mmdets", "segm")], folder_paths.supported_pt_extensions)
+add_folder_path_and_extensions("mmdets", [os.path.join(model_path, "mmdets")], folder_paths.supported_pt_extensions)
+add_folder_path_and_extensions("sams", [os.path.join(model_path, "sams")], folder_paths.supported_pt_extensions)
+add_folder_path_and_extensions("onnx", [os.path.join(model_path, "onnx")], {'.onnx'})
+
+
+# Nodes
+class ONNXDetectorProvider:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"model_name": (folder_paths.get_filename_list("onnx"), )}}
+
+    RETURN_TYPES = ("BBOX_DETECTOR", )
+    FUNCTION = "load_onnx"
+
+    CATEGORY = "ImpactPack"
+
+    def load_onnx(self, model_name):
+        model = folder_paths.get_full_path("onnx", model_name)
+        return (core.ONNXDetector(model), )
+
+
+class CLIPSegDetectorProvider:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                        "text": ("STRING", {"multiline": False}),
+                        "blur": ("FLOAT", {"min": 0, "max": 15, "step": 0.1, "default": 7}),
+                        "threshold": ("FLOAT", {"min": 0, "max": 1, "step": 0.05, "default": 0.4}),
+                        "dilation_factor": ("INT", {"min": 0, "max": 10, "step": 1, "default": 4}),
+                    }
+                }
+
+    RETURN_TYPES = ("BBOX_DETECTOR", )
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Util"
+
+    def doit(self, text, blur, threshold, dilation_factor):
+        if "CLIPSeg" in nodes.NODE_CLASS_MAPPINGS:
+            return (core.BBoxDetectorBasedOnCLIPSeg(text, blur, threshold, dilation_factor), )
+        else:
+            print("[ERROR] CLIPSegDetectorProvider: CLIPSeg custom node isn't installed. 
You must install biegert/ComfyUI-CLIPSeg extension to use this node.") + + +class SAMLoader: + @classmethod + def INPUT_TYPES(cls): + models = [x for x in folder_paths.get_filename_list("sams") if 'hq' not in x] + return { + "required": { + "model_name": (models, ), + "device_mode": (["AUTO", "Prefer GPU", "CPU"],), + } + } + + RETURN_TYPES = ("SAM_MODEL", ) + FUNCTION = "load_model" + + CATEGORY = "ImpactPack" + + def load_model(self, model_name, device_mode="auto"): + modelname = folder_paths.get_full_path("sams", model_name) + + if 'vit_h' in model_name: + model_kind = 'vit_h' + elif 'vit_l' in model_name: + model_kind = 'vit_l' + else: + model_kind = 'vit_b' + + sam = sam_model_registry[model_kind](checkpoint=modelname) + size = os.path.getsize(modelname) + sam.safe_to = core.SafeToGPU(size) + + # Unless user explicitly wants to use CPU, we use GPU + device = comfy.model_management.get_torch_device() if device_mode == "Prefer GPU" else "CPU" + + if device_mode == "Prefer GPU": + sam.safe_to.to_device(sam, device) + + sam.is_auto_mode = device_mode == "AUTO" + + print(f"Loads SAM model: {modelname} (device:{device_mode})") + return (sam, ) + + +class ONNXDetectorForEach: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "onnx_detector": ("ONNX_DETECTOR",), + "image": ("IMAGE",), + "threshold": ("FLOAT", {"default": 0.8, "min": 0.0, "max": 1.0, "step": 0.01}), + "dilation": ("INT", {"default": 10, "min": -512, "max": 512, "step": 1}), + "crop_factor": ("FLOAT", {"default": 1.0, "min": 0.5, "max": 100, "step": 0.1}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + } + } + + RETURN_TYPES = ("SEGS", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detector" + + OUTPUT_NODE = True + + def doit(self, onnx_detector, image, threshold, dilation, crop_factor, drop_size): + segs = onnx_detector.detect(image, threshold, dilation, crop_factor, drop_size) + return (segs, ) + + +class DetailerForEach: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "image": ("IMAGE", ), + "segs": ("SEGS", ), + "model": ("MODEL",), + "clip": ("CLIP",), + "vae": ("VAE",), + "guide_size": ("FLOAT", {"default": 384, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "guide_size_for": ("BOOLEAN", {"default": True, "label_on": "bbox", "label_off": "crop_region"}), + "max_size": ("FLOAT", {"default": 1024, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS,), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS,), + "positive": ("CONDITIONING",), + "negative": ("CONDITIONING",), + "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}), + "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}), + "noise_mask": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + "force_inpaint": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + "wildcard": ("STRING", {"multiline": True, "dynamicPrompts": False}), + + "cycle": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}), + }, + "optional": { + "detailer_hook": ("DETAILER_HOOK",), + "inpaint_model": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "noise_mask_feather": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), + } + } + + 
RETURN_TYPES = ("IMAGE", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + @staticmethod + def do_detail(image, segs, model, clip, vae, guide_size, guide_size_for_bbox, max_size, seed, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, feather, noise_mask, force_inpaint, wildcard_opt=None, detailer_hook=None, + refiner_ratio=None, refiner_model=None, refiner_clip=None, refiner_positive=None, refiner_negative=None, + cycle=1, inpaint_model=False, noise_mask_feather=0): + + if len(image) > 1: + raise Exception('[Impact Pack] ERROR: DetailerForEach does not allow image batches.\nPlease refer to https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + image = image.clone() + enhanced_alpha_list = [] + enhanced_list = [] + cropped_list = [] + cnet_pil_list = [] + + segs = core.segs_scale_match(segs, image.shape) + new_segs = [] + + wildcard_concat_mode = None + if wildcard_opt is not None: + if wildcard_opt.startswith('[CONCAT]'): + wildcard_concat_mode = 'concat' + wildcard_opt = wildcard_opt[8:] + wmode, wildcard_chooser = wildcards.process_wildcard_for_segs(wildcard_opt) + else: + wmode, wildcard_chooser = None, None + + if wmode in ['ASC', 'DSC']: + if wmode == 'ASC': + ordered_segs = sorted(segs[1], key=lambda x: (x.bbox[0], x.bbox[1])) + else: + ordered_segs = sorted(segs[1], key=lambda x: (x.bbox[0], x.bbox[1]), reverse=True) + else: + ordered_segs = segs[1] + + for i, seg in enumerate(ordered_segs): + cropped_image = seg.cropped_image if seg.cropped_image is not None \ + else crop_ndarray4(image.numpy(), seg.crop_region) + cropped_image = to_tensor(cropped_image) + mask = to_tensor(seg.cropped_mask) + mask = tensor_gaussian_blur_mask(mask, feather) + + is_mask_all_zeros = (seg.cropped_mask == 0).all().item() + if is_mask_all_zeros: + print(f"Detailer: segment skip [empty mask]") + continue + + if noise_mask: + cropped_mask = seg.cropped_mask + else: + cropped_mask = None + + if wildcard_chooser is not None and wmode != "LAB": + seg_seed, wildcard_item = wildcard_chooser.get(seg) + elif wildcard_chooser is not None and wmode == "LAB": + seg_seed, wildcard_item = None, wildcard_chooser.get(seg) + else: + seg_seed, wildcard_item = None, None + + seg_seed = seed + i if seg_seed is None else seg_seed + + enhanced_image, cnet_pils = core.enhance_detail(cropped_image, model, clip, vae, guide_size, guide_size_for_bbox, max_size, + seg.bbox, seg_seed, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, cropped_mask, force_inpaint, + wildcard_opt=wildcard_item, wildcard_opt_concat_mode=wildcard_concat_mode, + detailer_hook=detailer_hook, + refiner_ratio=refiner_ratio, refiner_model=refiner_model, + refiner_clip=refiner_clip, refiner_positive=refiner_positive, + refiner_negative=refiner_negative, control_net_wrapper=seg.control_net_wrapper, + cycle=cycle, inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + if cnet_pils is not None: + cnet_pil_list.extend(cnet_pils) + + if not (enhanced_image is None): + # don't latent composite-> converting to latent caused poor quality + # use image paste + image = image.cpu() + enhanced_image = enhanced_image.cpu() + tensor_paste(image, enhanced_image, (seg.crop_region[0], seg.crop_region[1]), mask) + enhanced_list.append(enhanced_image) + + if detailer_hook is not None: + detailer_hook.post_paste(image) + + if not (enhanced_image is None): + # Convert enhanced_pil_alpha to RGBA mode + enhanced_image_alpha = 
tensor_convert_rgba(enhanced_image) + new_seg_image = enhanced_image.numpy() # alpha should not be applied to seg_image + + # Apply the mask + mask = tensor_resize(mask, *tensor_get_size(enhanced_image)) + tensor_putalpha(enhanced_image_alpha, mask) + enhanced_alpha_list.append(enhanced_image_alpha) + else: + new_seg_image = None + + cropped_list.append(cropped_image) + + new_seg = SEG(new_seg_image, seg.cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, seg.control_net_wrapper) + new_segs.append(new_seg) + + image_tensor = tensor_convert_rgb(image) + + cropped_list.sort(key=lambda x: x.shape, reverse=True) + enhanced_list.sort(key=lambda x: x.shape, reverse=True) + enhanced_alpha_list.sort(key=lambda x: x.shape, reverse=True) + + return image_tensor, cropped_list, enhanced_list, enhanced_alpha_list, cnet_pil_list, (segs[0], new_segs) + + def doit(self, image, segs, model, clip, vae, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, + scheduler, positive, negative, denoise, feather, noise_mask, force_inpaint, wildcard, cycle=1, + detailer_hook=None, inpaint_model=False, noise_mask_feather=0): + + enhanced_img, *_ = \ + DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for, max_size, seed, steps, + cfg, sampler_name, scheduler, positive, negative, denoise, feather, noise_mask, + force_inpaint, wildcard, detailer_hook, + cycle=cycle, inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + return (enhanced_img, ) + + +class DetailerForEachPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "image": ("IMAGE", ), + "segs": ("SEGS", ), + "guide_size": ("FLOAT", {"default": 384, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "guide_size_for": ("BOOLEAN", {"default": True, "label_on": "bbox", "label_off": "crop_region"}), + "max_size": ("FLOAT", {"default": 1024, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS,), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS,), + "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}), + "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}), + "noise_mask": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + "force_inpaint": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + "basic_pipe": ("BASIC_PIPE", ), + "wildcard": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "refiner_ratio": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0}), + + "cycle": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}), + }, + "optional": { + "detailer_hook": ("DETAILER_HOOK",), + "refiner_basic_pipe_opt": ("BASIC_PIPE",), + "inpaint_model": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "noise_mask_feather": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), + } + } + + RETURN_TYPES = ("IMAGE", "SEGS", "BASIC_PIPE", "IMAGE") + RETURN_NAMES = ("image", "segs", "basic_pipe", "cnet_images") + OUTPUT_IS_LIST = (False, False, False, True) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + def doit(self, image, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + denoise, feather, noise_mask, force_inpaint, basic_pipe, wildcard, 
+ refiner_ratio=None, detailer_hook=None, refiner_basic_pipe_opt=None, + cycle=1, inpaint_model=False, noise_mask_feather=0): + + if len(image) > 1: + raise Exception('[Impact Pack] ERROR: DetailerForEach does not allow image batches.\nPlease refer to https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + model, clip, vae, positive, negative = basic_pipe + + if refiner_basic_pipe_opt is None: + refiner_model, refiner_clip, refiner_positive, refiner_negative = None, None, None, None + else: + refiner_model, refiner_clip, _, refiner_positive, refiner_negative = refiner_basic_pipe_opt + + enhanced_img, cropped, cropped_enhanced, cropped_enhanced_alpha, cnet_pil_list, new_segs = \ + DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for, max_size, seed, steps, cfg, + sampler_name, scheduler, positive, negative, denoise, feather, noise_mask, + force_inpaint, wildcard, detailer_hook, + refiner_ratio=refiner_ratio, refiner_model=refiner_model, + refiner_clip=refiner_clip, refiner_positive=refiner_positive, refiner_negative=refiner_negative, + cycle=cycle, inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + # set fallback image + if len(cnet_pil_list) == 0: + cnet_pil_list = [empty_pil_tensor()] + + return (enhanced_img, new_segs, basic_pipe, cnet_pil_list) + + +class FaceDetailer: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "image": ("IMAGE", ), + "model": ("MODEL",), + "clip": ("CLIP",), + "vae": ("VAE",), + "guide_size": ("FLOAT", {"default": 384, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "guide_size_for": ("BOOLEAN", {"default": True, "label_on": "bbox", "label_off": "crop_region"}), + "max_size": ("FLOAT", {"default": 1024, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS,), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS,), + "positive": ("CONDITIONING",), + "negative": ("CONDITIONING",), + "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}), + "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}), + "noise_mask": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + "force_inpaint": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + + "bbox_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "bbox_dilation": ("INT", {"default": 10, "min": -512, "max": 512, "step": 1}), + "bbox_crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}), + + "sam_detection_hint": (["center-1", "horizontal-2", "vertical-2", "rect-4", "diamond-4", "mask-area", "mask-points", "mask-point-bbox", "none"],), + "sam_dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + "sam_threshold": ("FLOAT", {"default": 0.93, "min": 0.0, "max": 1.0, "step": 0.01}), + "sam_bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}), + "sam_mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}), + "sam_mask_hint_use_negative": (["False", "Small", "Outter"],), + + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + + "bbox_detector": ("BBOX_DETECTOR", ), + "wildcard": 
("STRING", {"multiline": True, "dynamicPrompts": False}), + + "cycle": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}), + }, + "optional": { + "sam_model_opt": ("SAM_MODEL", ), + "segm_detector_opt": ("SEGM_DETECTOR", ), + "detailer_hook": ("DETAILER_HOOK",), + "inpaint_model": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "noise_mask_feather": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), + }} + + RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "MASK", "DETAILER_PIPE", "IMAGE") + RETURN_NAMES = ("image", "cropped_refined", "cropped_enhanced_alpha", "mask", "detailer_pipe", "cnet_images") + OUTPUT_IS_LIST = (False, True, True, False, False, True) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Simple" + + @staticmethod + def enhance_face(image, model, clip, vae, guide_size, guide_size_for_bbox, max_size, seed, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, feather, noise_mask, force_inpaint, + bbox_threshold, bbox_dilation, bbox_crop_factor, + sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold, + sam_mask_hint_use_negative, drop_size, + bbox_detector, segm_detector=None, sam_model_opt=None, wildcard_opt=None, detailer_hook=None, + refiner_ratio=None, refiner_model=None, refiner_clip=None, refiner_positive=None, refiner_negative=None, cycle=1, + inpaint_model=False, noise_mask_feather=0): + + # make default prompt as 'face' if empty prompt for CLIPSeg + bbox_detector.setAux('face') + segs = bbox_detector.detect(image, bbox_threshold, bbox_dilation, bbox_crop_factor, drop_size, detailer_hook=detailer_hook) + bbox_detector.setAux(None) + + # bbox + sam combination + if sam_model_opt is not None: + sam_mask = core.make_sam_mask(sam_model_opt, segs, image, sam_detection_hint, sam_dilation, + sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold, + sam_mask_hint_use_negative, ) + segs = core.segs_bitwise_and_mask(segs, sam_mask) + + elif segm_detector is not None: + segm_segs = segm_detector.detect(image, bbox_threshold, bbox_dilation, bbox_crop_factor, drop_size) + + if (hasattr(segm_detector, 'override_bbox_by_segm') and segm_detector.override_bbox_by_segm and + not (detailer_hook is not None and not hasattr(detailer_hook, 'override_bbox_by_segm'))): + segs = segm_segs + else: + segm_mask = core.segs_to_combined_mask(segm_segs) + segs = core.segs_bitwise_and_mask(segs, segm_mask) + + if len(segs[1]) > 0: + enhanced_img, _, cropped_enhanced, cropped_enhanced_alpha, cnet_pil_list, new_segs = \ + DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for_bbox, max_size, seed, steps, cfg, + sampler_name, scheduler, positive, negative, denoise, feather, noise_mask, + force_inpaint, wildcard_opt, detailer_hook, + refiner_ratio=refiner_ratio, refiner_model=refiner_model, + refiner_clip=refiner_clip, refiner_positive=refiner_positive, + refiner_negative=refiner_negative, + cycle=cycle, inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + else: + enhanced_img = image + cropped_enhanced = [] + cropped_enhanced_alpha = [] + cnet_pil_list = [] + + # Mask Generator + mask = core.segs_to_combined_mask(segs) + + if len(cropped_enhanced) == 0: + cropped_enhanced = [empty_pil_tensor()] + + if len(cropped_enhanced_alpha) == 0: + cropped_enhanced_alpha = [empty_pil_tensor()] + + if len(cnet_pil_list) == 0: + cnet_pil_list = [empty_pil_tensor()] + + return enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask, cnet_pil_list + + def doit(self, 
image, model, clip, vae, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, feather, noise_mask, force_inpaint, + bbox_threshold, bbox_dilation, bbox_crop_factor, + sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold, + sam_mask_hint_use_negative, drop_size, bbox_detector, wildcard, cycle=1, + sam_model_opt=None, segm_detector_opt=None, detailer_hook=None, inpaint_model=False, noise_mask_feather=0): + + result_img = None + result_mask = None + result_cropped_enhanced = [] + result_cropped_enhanced_alpha = [] + result_cnet_images = [] + + if len(image) > 1: + print(f"[Impact Pack] WARN: FaceDetailer is not a node designed for video detailing. If you intend to perform video detailing, please use Detailer For AnimateDiff.") + + for i, single_image in enumerate(image): + enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask, cnet_pil_list = FaceDetailer.enhance_face( + single_image.unsqueeze(0), model, clip, vae, guide_size, guide_size_for, max_size, seed + i, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, feather, noise_mask, force_inpaint, + bbox_threshold, bbox_dilation, bbox_crop_factor, + sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold, + sam_mask_hint_use_negative, drop_size, bbox_detector, segm_detector_opt, sam_model_opt, wildcard, detailer_hook, + cycle=cycle, inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + result_img = torch.cat((result_img, enhanced_img), dim=0) if result_img is not None else enhanced_img + result_mask = torch.cat((result_mask, mask), dim=0) if result_mask is not None else mask + result_cropped_enhanced.extend(cropped_enhanced) + result_cropped_enhanced_alpha.extend(cropped_enhanced_alpha) + result_cnet_images.extend(cnet_pil_list) + + pipe = (model, clip, vae, positive, negative, wildcard, bbox_detector, segm_detector_opt, sam_model_opt, detailer_hook, None, None, None, None) + return result_img, result_cropped_enhanced, result_cropped_enhanced_alpha, result_mask, pipe, result_cnet_images + + +class LatentPixelScale: + upscale_methods = ["nearest-exact", "bilinear", "lanczos", "area"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "samples": ("LATENT", ), + "scale_method": (s.upscale_methods,), + "scale_factor": ("FLOAT", {"default": 1.5, "min": 0.1, "max": 10000, "step": 0.1}), + "vae": ("VAE", ), + "use_tiled_vae": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + }, + "optional": { + "upscale_model_opt": ("UPSCALE_MODEL", ), + } + } + + RETURN_TYPES = ("LATENT", "IMAGE") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, samples, scale_method, scale_factor, vae, use_tiled_vae, upscale_model_opt=None): + if upscale_model_opt is None: + latimg = core.latent_upscale_on_pixel_space2(samples, scale_method, scale_factor, vae, use_tile=use_tiled_vae) + else: + latimg = core.latent_upscale_on_pixel_space_with_model2(samples, scale_method, upscale_model_opt, scale_factor, vae, use_tile=use_tiled_vae) + return latimg + + +class NoiseInjectionDetailerHookProvider: + schedules = ["skip_start", "from_start"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "schedule_for_cycle": (s.schedules,), + "source": (["CPU", "GPU"],), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "start_strength": ("FLOAT", {"default": 2.0, "min": 0.0, "max": 200.0, "step": 0.01}), + 
"end_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 200.0, "step": 0.01}), + }, + } + + RETURN_TYPES = ("DETAILER_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + def doit(self, schedule_for_cycle, source, seed, start_strength, end_strength): + try: + hook = hooks.InjectNoiseHookForDetailer(source, seed, start_strength, end_strength, + from_start=('from_start' in schedule_for_cycle)) + return (hook, ) + except Exception as e: + print("[ERROR] NoiseInjectionDetailerHookProvider: 'ComfyUI Noise' custom node isn't installed. You must install 'BlenderNeko/ComfyUI Noise' extension to use this node.") + print(f"\t{e}") + pass + + +class UnsamplerDetailerHookProvider: + schedules = ["skip_start", "from_start"] + + @classmethod + def INPUT_TYPES(s): + return {"required": + {"model": ("MODEL",), + "steps": ("INT", {"default": 25, "min": 1, "max": 10000}), + "start_end_at_step": ("INT", {"default": 21, "min": 0, "max": 10000}), + "end_end_at_step": ("INT", {"default": 24, "min": 0, "max": 10000}), + "cfg": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "normalize": (["disable", "enable"], ), + "positive": ("CONDITIONING", ), + "negative": ("CONDITIONING", ), + "schedule_for_cycle": (s.schedules,), + }} + + RETURN_TYPES = ("DETAILER_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + def doit(self, model, steps, start_end_at_step, end_end_at_step, cfg, sampler_name, + scheduler, normalize, positive, negative, schedule_for_cycle): + try: + hook = hooks.UnsamplerDetailerHook(model, steps, start_end_at_step, end_end_at_step, cfg, sampler_name, + scheduler, normalize, positive, negative, + from_start=('from_start' in schedule_for_cycle)) + + return (hook, ) + except Exception as e: + print("[ERROR] UnsamplerDetailerHookProvider: 'ComfyUI Noise' custom node isn't installed. 
You must install 'BlenderNeko/ComfyUI Noise' extension to use this node.") + print(f"\t{e}") + pass + + +class DenoiseSchedulerDetailerHookProvider: + schedules = ["simple"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "schedule_for_cycle": (s.schedules,), + "target_denoise": ("FLOAT", {"default": 0.3, "min": 0.0, "max": 1.0, "step": 0.01}), + }, + } + + RETURN_TYPES = ("DETAILER_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + def doit(self, schedule_for_cycle, target_denoise): + hook = hooks.SimpleDetailerDenoiseSchedulerHook(target_denoise) + return (hook, ) + + +class CoreMLDetailerHookProvider: + @classmethod + def INPUT_TYPES(s): + return {"required": {"mode": (["512x512", "768x768", "512x768", "768x512"], )}, } + + RETURN_TYPES = ("DETAILER_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + def doit(self, mode): + hook = hooks.CoreMLHook(mode) + return (hook, ) + + +class CfgScheduleHookProvider: + schedules = ["simple"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "schedule_for_iteration": (s.schedules,), + "target_cfg": ("FLOAT", {"default": 3.0, "min": 0.0, "max": 100.0}), + }, + } + + RETURN_TYPES = ("PK_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, schedule_for_iteration, target_cfg): + hook = None + if schedule_for_iteration == "simple": + hook = hooks.SimpleCfgScheduleHook(target_cfg) + + return (hook, ) + + +class UnsamplerHookProvider: + schedules = ["simple"] + + @classmethod + def INPUT_TYPES(s): + return {"required": + {"model": ("MODEL",), + "steps": ("INT", {"default": 25, "min": 1, "max": 10000}), + "start_end_at_step": ("INT", {"default": 21, "min": 0, "max": 10000}), + "end_end_at_step": ("INT", {"default": 24, "min": 0, "max": 10000}), + "cfg": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "normalize": (["disable", "enable"], ), + "positive": ("CONDITIONING", ), + "negative": ("CONDITIONING", ), + "schedule_for_iteration": (s.schedules,), + }} + + RETURN_TYPES = ("PK_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, model, steps, start_end_at_step, end_end_at_step, cfg, sampler_name, + scheduler, normalize, positive, negative, schedule_for_iteration): + try: + hook = None + if schedule_for_iteration == "simple": + hook = hooks.UnsamplerHook(model, steps, start_end_at_step, end_end_at_step, cfg, sampler_name, + scheduler, normalize, positive, negative) + + return (hook, ) + except Exception as e: + print("[ERROR] UnsamplerHookProvider: 'ComfyUI Noise' custom node isn't installed. 
You must install 'BlenderNeko/ComfyUI Noise' extension to use this node.") + print(f"\t{e}") + pass + + +class NoiseInjectionHookProvider: + schedules = ["simple"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "schedule_for_iteration": (s.schedules,), + "source": (["CPU", "GPU"],), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "start_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 200.0, "step": 0.01}), + "end_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 200.0, "step": 0.01}), + }, + } + + RETURN_TYPES = ("PK_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, schedule_for_iteration, source, seed, start_strength, end_strength): + try: + hook = None + if schedule_for_iteration == "simple": + hook = hooks.InjectNoiseHook(source, seed, start_strength, end_strength) + + return (hook, ) + except Exception as e: + print("[ERROR] NoiseInjectionHookProvider: 'ComfyUI Noise' custom node isn't installed. You must install 'BlenderNeko/ComfyUI Noise' extension to use this node.") + print(f"\t{e}") + pass + + +class DenoiseScheduleHookProvider: + schedules = ["simple"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "schedule_for_iteration": (s.schedules,), + "target_denoise": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0, "step": 0.01}), + }, + } + + RETURN_TYPES = ("PK_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, schedule_for_iteration, target_denoise): + hook = None + if schedule_for_iteration == "simple": + hook = hooks.SimpleDenoiseScheduleHook(target_denoise) + + return (hook, ) + + +class StepsScheduleHookProvider: + schedules = ["simple"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "schedule_for_iteration": (s.schedules,), + "target_steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + }, + } + + RETURN_TYPES = ("PK_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, schedule_for_iteration, target_steps): + hook = None + if schedule_for_iteration == "simple": + hook = hooks.SimpleStepsScheduleHook(target_steps) + + return (hook, ) + + +class DetailerHookCombine: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "hook1": ("DETAILER_HOOK",), + "hook2": ("DETAILER_HOOK",), + }, + } + + RETURN_TYPES = ("DETAILER_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, hook1, hook2): + hook = hooks.DetailerHookCombine(hook1, hook2) + return (hook, ) + + +class PixelKSampleHookCombine: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "hook1": ("PK_HOOK",), + "hook2": ("PK_HOOK",), + }, + } + + RETURN_TYPES = ("PK_HOOK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, hook1, hook2): + hook = hooks.PixelKSampleHookCombine(hook1, hook2) + return (hook, ) + + +class PixelTiledKSampleUpscalerProvider: + upscale_methods = ["nearest-exact", "bilinear", "lanczos", "area"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "scale_method": (s.upscale_methods,), + "model": ("MODEL",), + "vae": ("VAE",), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "positive": ("CONDITIONING", ), + "negative": ("CONDITIONING", ), + "denoise": ("FLOAT", {"default": 
1.0, "min": 0.0, "max": 1.0, "step": 0.01}), + "tile_width": ("INT", {"default": 512, "min": 320, "max": MAX_RESOLUTION, "step": 64}), + "tile_height": ("INT", {"default": 512, "min": 320, "max": MAX_RESOLUTION, "step": 64}), + "tiling_strategy": (["random", "padded", 'simple'], ), + }, + "optional": { + "upscale_model_opt": ("UPSCALE_MODEL", ), + "pk_hook_opt": ("PK_HOOK", ), + } + } + + RETURN_TYPES = ("UPSCALER",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy, upscale_model_opt=None, pk_hook_opt=None): + if "BNK_TiledKSampler" in nodes.NODE_CLASS_MAPPINGS: + upscaler = core.PixelTiledKSampleUpscaler(scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy, upscale_model_opt, pk_hook_opt, tile_size=max(tile_width, tile_height)) + return (upscaler, ) + else: + print("[ERROR] PixelTiledKSampleUpscalerProvider: ComfyUI_TiledKSampler custom node isn't installed. You must install BlenderNeko/ComfyUI_TiledKSampler extension to use this node.") + + +class PixelTiledKSampleUpscalerProviderPipe: + upscale_methods = ["nearest-exact", "bilinear", "lanczos", "area"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "scale_method": (s.upscale_methods,), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}), + "tile_width": ("INT", {"default": 512, "min": 320, "max": MAX_RESOLUTION, "step": 64}), + "tile_height": ("INT", {"default": 512, "min": 320, "max": MAX_RESOLUTION, "step": 64}), + "tiling_strategy": (["random", "padded", 'simple'], ), + "basic_pipe": ("BASIC_PIPE",) + }, + "optional": { + "upscale_model_opt": ("UPSCALE_MODEL", ), + "pk_hook_opt": ("PK_HOOK", ), + } + } + + RETURN_TYPES = ("UPSCALER",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, scale_method, seed, steps, cfg, sampler_name, scheduler, denoise, tile_width, tile_height, tiling_strategy, basic_pipe, upscale_model_opt=None, pk_hook_opt=None): + if "BNK_TiledKSampler" in nodes.NODE_CLASS_MAPPINGS: + model, _, vae, positive, negative = basic_pipe + upscaler = core.PixelTiledKSampleUpscaler(scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, tile_width, tile_height, tiling_strategy, upscale_model_opt, pk_hook_opt, tile_size=max(tile_width, tile_height)) + return (upscaler, ) + else: + print("[ERROR] PixelTiledKSampleUpscalerProviderPipe: ComfyUI_TiledKSampler custom node isn't installed. 
You must install BlenderNeko/ComfyUI_TiledKSampler extension to use this node.") + + +class PixelKSampleUpscalerProvider: + upscale_methods = ["nearest-exact", "bilinear", "lanczos", "area"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "scale_method": (s.upscale_methods,), + "model": ("MODEL",), + "vae": ("VAE",), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "positive": ("CONDITIONING", ), + "negative": ("CONDITIONING", ), + "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}), + "use_tiled_vae": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "tile_size": ("INT", {"default": 512, "min": 320, "max": 4096, "step": 64}), + }, + "optional": { + "upscale_model_opt": ("UPSCALE_MODEL", ), + "pk_hook_opt": ("PK_HOOK", ), + } + } + + RETURN_TYPES = ("UPSCALER",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, + use_tiled_vae, upscale_model_opt=None, pk_hook_opt=None, tile_size=512): + upscaler = core.PixelKSampleUpscaler(scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, use_tiled_vae, upscale_model_opt, pk_hook_opt, + tile_size=tile_size) + return (upscaler, ) + + +class PixelKSampleUpscalerProviderPipe(PixelKSampleUpscalerProvider): + upscale_methods = ["nearest-exact", "bilinear", "lanczos", "area"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "scale_method": (s.upscale_methods,), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}), + "use_tiled_vae": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "basic_pipe": ("BASIC_PIPE",), + "tile_size": ("INT", {"default": 512, "min": 320, "max": 4096, "step": 64}), + }, + "optional": { + "upscale_model_opt": ("UPSCALE_MODEL", ), + "pk_hook_opt": ("PK_HOOK", ), + } + } + + RETURN_TYPES = ("UPSCALER",) + FUNCTION = "doit_pipe" + + CATEGORY = "ImpactPack/Upscale" + + def doit_pipe(self, scale_method, seed, steps, cfg, sampler_name, scheduler, denoise, + use_tiled_vae, basic_pipe, upscale_model_opt=None, pk_hook_opt=None, tile_size=512): + model, _, vae, positive, negative = basic_pipe + upscaler = core.PixelKSampleUpscaler(scale_method, model, vae, seed, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, use_tiled_vae, upscale_model_opt, pk_hook_opt, + tile_size=tile_size) + return (upscaler, ) + + +class TwoSamplersForMaskUpscalerProvider: + upscale_methods = ["nearest-exact", "bilinear", "lanczos", "area"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "scale_method": (s.upscale_methods,), + "full_sample_schedule": ( + ["none", "interleave1", "interleave2", "interleave3", + "last1", "last2", + "interleave1+last1", "interleave2+last1", "interleave3+last1", + ],), + "use_tiled_vae": ("BOOLEAN", {"default": False, "label_on": 
"enabled", "label_off": "disabled"}), + "base_sampler": ("KSAMPLER", ), + "mask_sampler": ("KSAMPLER", ), + "mask": ("MASK", ), + "vae": ("VAE",), + "tile_size": ("INT", {"default": 512, "min": 320, "max": 4096, "step": 64}), + }, + "optional": { + "full_sampler_opt": ("KSAMPLER",), + "upscale_model_opt": ("UPSCALE_MODEL", ), + "pk_hook_base_opt": ("PK_HOOK", ), + "pk_hook_mask_opt": ("PK_HOOK", ), + "pk_hook_full_opt": ("PK_HOOK", ), + } + } + + RETURN_TYPES = ("UPSCALER", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, scale_method, full_sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, vae, + full_sampler_opt=None, upscale_model_opt=None, + pk_hook_base_opt=None, pk_hook_mask_opt=None, pk_hook_full_opt=None, tile_size=512): + upscaler = core.TwoSamplersForMaskUpscaler(scale_method, full_sample_schedule, use_tiled_vae, + base_sampler, mask_sampler, mask, vae, full_sampler_opt, upscale_model_opt, + pk_hook_base_opt, pk_hook_mask_opt, pk_hook_full_opt, tile_size=tile_size) + return (upscaler, ) + + +class TwoSamplersForMaskUpscalerProviderPipe: + upscale_methods = ["nearest-exact", "bilinear", "lanczos", "area"] + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "scale_method": (s.upscale_methods,), + "full_sample_schedule": ( + ["none", "interleave1", "interleave2", "interleave3", + "last1", "last2", + "interleave1+last1", "interleave2+last1", "interleave3+last1", + ],), + "use_tiled_vae": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "base_sampler": ("KSAMPLER", ), + "mask_sampler": ("KSAMPLER", ), + "mask": ("MASK", ), + "basic_pipe": ("BASIC_PIPE",), + "tile_size": ("INT", {"default": 512, "min": 320, "max": 4096, "step": 64}), + }, + "optional": { + "full_sampler_opt": ("KSAMPLER",), + "upscale_model_opt": ("UPSCALE_MODEL", ), + "pk_hook_base_opt": ("PK_HOOK", ), + "pk_hook_mask_opt": ("PK_HOOK", ), + "pk_hook_full_opt": ("PK_HOOK", ), + } + } + + RETURN_TYPES = ("UPSCALER", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, scale_method, full_sample_schedule, use_tiled_vae, base_sampler, mask_sampler, mask, basic_pipe, + full_sampler_opt=None, upscale_model_opt=None, + pk_hook_base_opt=None, pk_hook_mask_opt=None, pk_hook_full_opt=None, tile_size=512): + + mask = make_2d_mask(mask) + + _, _, vae, _, _ = basic_pipe + upscaler = core.TwoSamplersForMaskUpscaler(scale_method, full_sample_schedule, use_tiled_vae, + base_sampler, mask_sampler, mask, vae, full_sampler_opt, upscale_model_opt, + pk_hook_base_opt, pk_hook_mask_opt, pk_hook_full_opt, tile_size=tile_size) + return (upscaler, ) + + +class IterativeLatentUpscale: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "samples": ("LATENT", ), + "upscale_factor": ("FLOAT", {"default": 1.5, "min": 1, "max": 10000, "step": 0.1}), + "steps": ("INT", {"default": 3, "min": 1, "max": 10000, "step": 1}), + "temp_prefix": ("STRING", {"default": ""}), + "upscaler": ("UPSCALER",) + }, + "hidden": {"unique_id": "UNIQUE_ID"}, + } + + RETURN_TYPES = ("LATENT", "VAE") + RETURN_NAMES = ("latent", "vae") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, samples, upscale_factor, steps, temp_prefix, upscaler, unique_id): + w = samples['samples'].shape[3]*8 # image width + h = samples['samples'].shape[2]*8 # image height + + if temp_prefix == "": + temp_prefix = None + + upscale_factor_unit = max(0, (upscale_factor-1.0)/steps) + current_latent = samples + scale = 1 + + for i in range(steps-1): + 
scale += upscale_factor_unit + new_w = w*scale + new_h = h*scale + core.update_node_status(unique_id, f"{i+1}/{steps} steps | x{scale:.2f}", (i+1)/steps) + print(f"IterativeLatentUpscale[{i+1}/{steps}]: {new_w:.1f}x{new_h:.1f} (scale:{scale:.2f}) ") + step_info = i, steps + current_latent = upscaler.upscale_shape(step_info, current_latent, new_w, new_h, temp_prefix) + + if scale < upscale_factor: + new_w = w*upscale_factor + new_h = h*upscale_factor + core.update_node_status(unique_id, f"Final step | x{upscale_factor:.2f}", 1.0) + print(f"IterativeLatentUpscale[Final]: {new_w:.1f}x{new_h:.1f} (scale:{upscale_factor:.2f}) ") + step_info = steps-1, steps + current_latent = upscaler.upscale_shape(step_info, current_latent, new_w, new_h, temp_prefix) + + core.update_node_status(unique_id, "", None) + + return (current_latent, upscaler.vae) + + +class IterativeImageUpscale: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "pixels": ("IMAGE", ), + "upscale_factor": ("FLOAT", {"default": 1.5, "min": 1, "max": 10000, "step": 0.1}), + "steps": ("INT", {"default": 3, "min": 1, "max": 10000, "step": 1}), + "temp_prefix": ("STRING", {"default": ""}), + "upscaler": ("UPSCALER",), + "vae": ("VAE",), + }, + "hidden": {"unique_id": "UNIQUE_ID"} + } + + RETURN_TYPES = ("IMAGE",) + RETURN_NAMES = ("image",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Upscale" + + def doit(self, pixels, upscale_factor, steps, temp_prefix, upscaler, vae, unique_id): + if temp_prefix == "": + temp_prefix = None + + core.update_node_status(unique_id, "VAEEncode (first)", 0) + if upscaler.is_tiled: + latent = nodes.VAEEncodeTiled().encode(vae, pixels, upscaler.tile_size)[0] + else: + latent = nodes.VAEEncode().encode(vae, pixels)[0] + + refined_latent = IterativeLatentUpscale().doit(latent, upscale_factor, steps, temp_prefix, upscaler, unique_id) + + core.update_node_status(unique_id, "VAEDecode (final)", 1.0) + if upscaler.is_tiled: + pixels = nodes.VAEDecodeTiled().decode(vae, refined_latent[0], upscaler.tile_size)[0] + else: + pixels = nodes.VAEDecode().decode(vae, refined_latent[0])[0] + + core.update_node_status(unique_id, "", None) + + return (pixels, ) + + +class FaceDetailerPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "image": ("IMAGE", ), + "detailer_pipe": ("DETAILER_PIPE",), + "guide_size": ("FLOAT", {"default": 384, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "guide_size_for": ("BOOLEAN", {"default": True, "label_on": "bbox", "label_off": "crop_region"}), + "max_size": ("FLOAT", {"default": 1024, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS,), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS,), + "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}), + "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}), + "noise_mask": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + "force_inpaint": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + + "bbox_threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "bbox_dilation": ("INT", {"default": 10, "min": -512, "max": 512, "step": 1}), + "bbox_crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}), + + 
"sam_detection_hint": (["center-1", "horizontal-2", "vertical-2", "rect-4", "diamond-4", "mask-area", "mask-points", "mask-point-bbox", "none"],), + "sam_dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + "sam_threshold": ("FLOAT", {"default": 0.93, "min": 0.0, "max": 1.0, "step": 0.01}), + "sam_bbox_expansion": ("INT", {"default": 0, "min": 0, "max": 1000, "step": 1}), + "sam_mask_hint_threshold": ("FLOAT", {"default": 0.7, "min": 0.0, "max": 1.0, "step": 0.01}), + "sam_mask_hint_use_negative": (["False", "Small", "Outter"],), + + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + "refiner_ratio": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0}), + + "cycle": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}), + }, + "optional": { + "inpaint_model": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "noise_mask_feather": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), + } + } + + RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "MASK", "DETAILER_PIPE", "IMAGE") + RETURN_NAMES = ("image", "cropped_refined", "cropped_enhanced_alpha", "mask", "detailer_pipe", "cnet_images") + OUTPUT_IS_LIST = (False, True, True, False, False, True) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Simple" + + def doit(self, image, detailer_pipe, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + denoise, feather, noise_mask, force_inpaint, bbox_threshold, bbox_dilation, bbox_crop_factor, + sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion, + sam_mask_hint_threshold, sam_mask_hint_use_negative, drop_size, refiner_ratio=None, + cycle=1, inpaint_model=False, noise_mask_feather=0): + + result_img = None + result_mask = None + result_cropped_enhanced = [] + result_cropped_enhanced_alpha = [] + result_cnet_images = [] + + if len(image) > 1: + print(f"[Impact Pack] WARN: FaceDetailer is not a node designed for video detailing. 
If you intend to perform video detailing, please use Detailer For AnimateDiff.") + + model, clip, vae, positive, negative, wildcard, bbox_detector, segm_detector, sam_model_opt, detailer_hook, \ + refiner_model, refiner_clip, refiner_positive, refiner_negative = detailer_pipe + + for i, single_image in enumerate(image): + enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask, cnet_pil_list = FaceDetailer.enhance_face( + single_image.unsqueeze(0), model, clip, vae, guide_size, guide_size_for, max_size, seed + i, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, feather, noise_mask, force_inpaint, + bbox_threshold, bbox_dilation, bbox_crop_factor, + sam_detection_hint, sam_dilation, sam_threshold, sam_bbox_expansion, sam_mask_hint_threshold, + sam_mask_hint_use_negative, drop_size, bbox_detector, segm_detector, sam_model_opt, wildcard, detailer_hook, + refiner_ratio=refiner_ratio, refiner_model=refiner_model, + refiner_clip=refiner_clip, refiner_positive=refiner_positive, refiner_negative=refiner_negative, + cycle=cycle, inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + result_img = torch.cat((result_img, enhanced_img), dim=0) if result_img is not None else enhanced_img + result_mask = torch.cat((result_mask, mask), dim=0) if result_mask is not None else mask + result_cropped_enhanced.extend(cropped_enhanced) + result_cropped_enhanced_alpha.extend(cropped_enhanced_alpha) + result_cnet_images.extend(cnet_pil_list) + + if len(result_cropped_enhanced) == 0: + result_cropped_enhanced = [empty_pil_tensor()] + + if len(result_cropped_enhanced_alpha) == 0: + result_cropped_enhanced_alpha = [empty_pil_tensor()] + + if len(result_cnet_images) == 0: + result_cnet_images = [empty_pil_tensor()] + + return result_img, result_cropped_enhanced, result_cropped_enhanced_alpha, result_mask, detailer_pipe, result_cnet_images + + +class MaskDetailerPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "image": ("IMAGE", ), + "mask": ("MASK", ), + "basic_pipe": ("BASIC_PIPE",), + + "guide_size": ("FLOAT", {"default": 384, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "guide_size_for": ("BOOLEAN", {"default": True, "label_on": "mask bbox", "label_off": "crop region"}), + "max_size": ("FLOAT", {"default": 1024, "min": 64, "max": nodes.MAX_RESOLUTION, "step": 8}), + "mask_mode": ("BOOLEAN", {"default": True, "label_on": "masked only", "label_off": "whole"}), + + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS,), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS,), + "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}), + + "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}), + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + "refiner_ratio": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0}), + "batch_size": ("INT", {"default": 1, "min": 1, "max": 100}), + + "cycle": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}), + }, + "optional": { + "refiner_basic_pipe_opt": ("BASIC_PIPE", ), + "detailer_hook": ("DETAILER_HOOK",), + "inpaint_model": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "noise_mask_feather": ("INT", {"default": 0, "min": 0, "max": 
100, "step": 1}), + } + } + + RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "BASIC_PIPE", "BASIC_PIPE") + RETURN_NAMES = ("image", "cropped_refined", "cropped_enhanced_alpha", "basic_pipe", "refiner_basic_pipe_opt") + OUTPUT_IS_LIST = (False, True, True, False, False) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + def doit(self, image, mask, basic_pipe, guide_size, guide_size_for, max_size, mask_mode, + seed, steps, cfg, sampler_name, scheduler, denoise, + feather, crop_factor, drop_size, refiner_ratio, batch_size, cycle=1, + refiner_basic_pipe_opt=None, detailer_hook=None, inpaint_model=False, noise_mask_feather=0): + + if len(image) > 1: + raise Exception('[Impact Pack] ERROR: MaskDetailer does not allow image batches.\nPlease refer to https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + model, clip, vae, positive, negative = basic_pipe + + if refiner_basic_pipe_opt is None: + refiner_model, refiner_clip, refiner_positive, refiner_negative = None, None, None, None + else: + refiner_model, refiner_clip, _, refiner_positive, refiner_negative = refiner_basic_pipe_opt + + # create segs + if mask is not None: + mask = make_2d_mask(mask) + segs = core.mask_to_segs(mask, False, crop_factor, False, drop_size) + else: + segs = ((image.shape[1], image.shape[2]), []) + + enhanced_img_batch = None + cropped_enhanced_list = [] + cropped_enhanced_alpha_list = [] + + for i in range(batch_size): + if mask is not None: + enhanced_img, _, cropped_enhanced, cropped_enhanced_alpha, _, _ = \ + DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for, max_size, seed+i, steps, + cfg, sampler_name, scheduler, positive, negative, denoise, feather, mask_mode, + force_inpaint=True, wildcard_opt=None, detailer_hook=detailer_hook, + refiner_ratio=refiner_ratio, refiner_model=refiner_model, refiner_clip=refiner_clip, + refiner_positive=refiner_positive, refiner_negative=refiner_negative, + cycle=cycle, inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + else: + enhanced_img, cropped_enhanced, cropped_enhanced_alpha = image, [], [] + + if enhanced_img_batch is None: + enhanced_img_batch = enhanced_img + else: + enhanced_img_batch = torch.cat((enhanced_img_batch, enhanced_img), dim=0) + + cropped_enhanced_list += cropped_enhanced + cropped_enhanced_alpha_list += cropped_enhanced_alpha + + # set fallback image + if len(cropped_enhanced_list) == 0: + cropped_enhanced_list = [empty_pil_tensor()] + + if len(cropped_enhanced_alpha_list) == 0: + cropped_enhanced_alpha_list = [empty_pil_tensor()] + + return enhanced_img_batch, cropped_enhanced_list, cropped_enhanced_alpha_list, basic_pipe, refiner_basic_pipe_opt + + +class DetailerForEachTest(DetailerForEach): + RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "IMAGE", "IMAGE") + RETURN_NAMES = ("image", "cropped", "cropped_refined", "cropped_refined_alpha", "cnet_images") + OUTPUT_IS_LIST = (False, True, True, True, True) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + def doit(self, image, segs, model, clip, vae, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, + scheduler, positive, negative, denoise, feather, noise_mask, force_inpaint, wildcard, detailer_hook=None, + cycle=1, inpaint_model=False, noise_mask_feather=0): + + if len(image) > 1: + raise Exception('[Impact Pack] ERROR: DetailerForEach does not allow image batches.\nPlease refer to 
https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + enhanced_img, cropped, cropped_enhanced, cropped_enhanced_alpha, cnet_pil_list, new_segs = \ + DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for, max_size, seed, steps, + cfg, sampler_name, scheduler, positive, negative, denoise, feather, noise_mask, + force_inpaint, wildcard, detailer_hook, + cycle=cycle, inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + # set fallback image + if len(cropped) == 0: + cropped = [empty_pil_tensor()] + + if len(cropped_enhanced) == 0: + cropped_enhanced = [empty_pil_tensor()] + + if len(cropped_enhanced_alpha) == 0: + cropped_enhanced_alpha = [empty_pil_tensor()] + + if len(cnet_pil_list) == 0: + cnet_pil_list = [empty_pil_tensor()] + + return enhanced_img, cropped, cropped_enhanced, cropped_enhanced_alpha, cnet_pil_list + + +class DetailerForEachTestPipe(DetailerForEachPipe): + RETURN_TYPES = ("IMAGE", "SEGS", "BASIC_PIPE", "IMAGE", "IMAGE", "IMAGE", "IMAGE", ) + RETURN_NAMES = ("image", "segs", "basic_pipe", "cropped", "cropped_refined", "cropped_refined_alpha", 'cnet_images') + OUTPUT_IS_LIST = (False, False, False, True, True, True, True) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + def doit(self, image, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + denoise, feather, noise_mask, force_inpaint, basic_pipe, wildcard, cycle=1, + refiner_ratio=None, detailer_hook=None, refiner_basic_pipe_opt=None, inpaint_model=False, noise_mask_feather=0): + + if len(image) > 1: + raise Exception('[Impact Pack] ERROR: DetailerForEach does not allow image batches.\nPlease refer to https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + model, clip, vae, positive, negative = basic_pipe + + if refiner_basic_pipe_opt is None: + refiner_model, refiner_clip, refiner_positive, refiner_negative = None, None, None, None + else: + refiner_model, refiner_clip, _, refiner_positive, refiner_negative = refiner_basic_pipe_opt + + enhanced_img, cropped, cropped_enhanced, cropped_enhanced_alpha, cnet_pil_list, new_segs = \ + DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for, max_size, seed, steps, cfg, + sampler_name, scheduler, positive, negative, denoise, feather, noise_mask, + force_inpaint, wildcard, detailer_hook, + refiner_ratio=refiner_ratio, refiner_model=refiner_model, + refiner_clip=refiner_clip, refiner_positive=refiner_positive, + refiner_negative=refiner_negative, + cycle=cycle, inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + # set fallback image + if len(cropped) == 0: + cropped = [empty_pil_tensor()] + + if len(cropped_enhanced) == 0: + cropped_enhanced = [empty_pil_tensor()] + + if len(cropped_enhanced_alpha) == 0: + cropped_enhanced_alpha = [empty_pil_tensor()] + + if len(cnet_pil_list) == 0: + cnet_pil_list = [empty_pil_tensor()] + + return enhanced_img, new_segs, basic_pipe, cropped, cropped_enhanced, cropped_enhanced_alpha, cnet_pil_list + + +class SegsBitwiseAndMask: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS",), + "mask": ("MASK",), + } + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Operation" + + def doit(self, segs, mask): + return (core.segs_bitwise_and_mask(segs, mask), ) + + +class 
SegsBitwiseAndMaskForEach:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                    "segs": ("SEGS",),
+                    "masks": ("MASK",),
+                }
+                }
+
+    RETURN_TYPES = ("SEGS",)
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Operation"
+
+    def doit(self, segs, masks):
+        return (core.apply_mask_to_each_seg(segs, masks), )
+
+
+class BitwiseAndMaskForEach:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                    "base_segs": ("SEGS",),
+                    "mask_segs": ("SEGS",),
+                }
+                }
+
+    RETURN_TYPES = ("SEGS",)
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Operation"
+
+    def doit(self, base_segs, mask_segs):
+        result = []
+
+        for bseg in base_segs[1]:
+            cropped_mask1 = bseg.cropped_mask.copy()
+            crop_region1 = bseg.crop_region
+
+            overlapped = False
+
+            for mseg in mask_segs[1]:
+                cropped_mask2 = mseg.cropped_mask
+                crop_region2 = mseg.crop_region
+
+                # compute the intersection of the two crop regions
+                intersect_region = (max(crop_region1[0], crop_region2[0]),
+                                    max(crop_region1[1], crop_region2[1]),
+                                    min(crop_region1[2], crop_region2[2]),
+                                    min(crop_region1[3], crop_region2[3]))
+
+                # set all pixels in cropped_mask1 to 0 except for those that overlap with cropped_mask2
+                for i in range(intersect_region[0], intersect_region[2]):
+                    for j in range(intersect_region[1], intersect_region[3]):
+                        if cropped_mask1[j - crop_region1[1], i - crop_region1[0]] == 1 and \
+                                cropped_mask2[j - crop_region2[1], i - crop_region2[0]] == 1:
+                            # pixel is set in both masks, keep it as 1
+                            overlapped = True
+                        else:
+                            # pixel is not set in both masks, set it to 0
+                            cropped_mask1[j - crop_region1[1], i - crop_region1[0]] = 0
+
+            if overlapped:
+                item = SEG(bseg.cropped_image, cropped_mask1, bseg.confidence, bseg.crop_region, bseg.bbox, bseg.label, None)
+                result.append(item)
+
+        return ((base_segs[0], result),)
+
+
+class SubtractMaskForEach:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                    "base_segs": ("SEGS",),
+                    "mask_segs": ("SEGS",),
+                }
+                }
+
+    RETURN_TYPES = ("SEGS",)
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Operation"
+
+    def doit(self, base_segs, mask_segs):
+        result = []
+
+        for bseg in base_segs[1]:
+            cropped_mask1 = bseg.cropped_mask.copy()
+            crop_region1 = bseg.crop_region
+
+            changed = False
+
+            for mseg in mask_segs[1]:
+                cropped_mask2 = mseg.cropped_mask
+                crop_region2 = mseg.crop_region
+
+                # compute the intersection of the two crop regions
+                intersect_region = (max(crop_region1[0], crop_region2[0]),
+                                    max(crop_region1[1], crop_region2[1]),
+                                    min(crop_region1[2], crop_region2[2]),
+                                    min(crop_region1[3], crop_region2[3]))
+
+                # subtract operation: clear every pixel of cropped_mask1 that is also set in cropped_mask2
+                for i in range(intersect_region[0], intersect_region[2]):
+                    for j in range(intersect_region[1], intersect_region[3]):
+                        if cropped_mask1[j - crop_region1[1], i - crop_region1[0]] == 1 and \
+                                cropped_mask2[j - crop_region2[1], i - crop_region2[0]] == 1:
+                            # pixel is set in both masks, set it to 0
+                            changed = True
+                            cropped_mask1[j - crop_region1[1], i - crop_region1[0]] = 0
+
+            if changed:
+                item = SEG(bseg.cropped_image, cropped_mask1, bseg.confidence, bseg.crop_region, bseg.bbox, bseg.label, None)
+                result.append(item)
+            else:
+                # nothing was subtracted from this seg, so keep it unchanged
+                result.append(bseg)
+
+        return ((base_segs[0], result),)
+
+
+class ToBinaryMask:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                    "mask": ("MASK",),
+                    "threshold": ("INT", {"default": 20, "min": 1, "max": 255}),
+                }
+                }
+
+    RETURN_TYPES = ("MASK",)
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Operation"
+
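+    # 'threshold' is an INT in the 1..255 range; doit() below rescales it to 0..1
+    # before delegating to to_binary_mask().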
+    def doit(self, mask, threshold):
+        mask = to_binary_mask(mask, threshold/255.0)
+        return (mask,)
+
+
+class BitwiseAndMask:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                    "mask1": ("MASK",),
+                    "mask2": ("MASK",),
+                }
+                }
+
+    RETURN_TYPES = ("MASK",)
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Operation"
+
+    def doit(self, mask1, mask2):
+        mask = bitwise_and_masks(mask1, mask2)
+        return (mask,)
+
+
+class SubtractMask:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                    "mask1": ("MASK", ),
+                    "mask2": ("MASK", ),
+                }
+                }
+
+    RETURN_TYPES = ("MASK",)
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Operation"
+
+    def doit(self, mask1, mask2):
+        mask = subtract_masks(mask1, mask2)
+        return (mask,)
+
+
+class AddMask:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                    "mask1": ("MASK",),
+                    "mask2": ("MASK",),
+                }
+                }
+
+    RETURN_TYPES = ("MASK",)
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Operation"
+
+    def doit(self, mask1, mask2):
+        mask = add_masks(mask1, mask2)
+        return (mask,)
+
+
+import nodes
+
+
+def get_image_hash(arr):
+    split_index1 = arr.shape[0] // 2
+    split_index2 = arr.shape[1] // 2
+    part1 = arr[:split_index1, :split_index2]
+    part2 = arr[:split_index1, split_index2:]
+    part3 = arr[split_index1:, :split_index2]
+    part4 = arr[split_index1:, split_index2:]
+
+    # sum each quadrant
+    sum1 = np.sum(part1)
+    sum2 = np.sum(part2)
+    sum3 = np.sum(part3)
+    sum4 = np.sum(part4)
+
+    return hash((sum1, sum2, sum3, sum4))
+
+
+def get_file_item(base_type, path):
+    path_type = base_type
+
+    if path.endswith("[output]"):
+        path_type = "output"
+        path = path[:-9]
+    elif path.endswith("[input]"):
+        path_type = "input"
+        path = path[:-8]
+    elif path.endswith("[temp]"):
+        path_type = "temp"
+        path = path[:-7]
+
+    subfolder = os.path.dirname(path)
+    filename = os.path.basename(path)
+
+    return {
+        "filename": filename,
+        "subfolder": subfolder,
+        "type": path_type
+    }
+
+
+class ImageReceiver:
+    @classmethod
+    def INPUT_TYPES(s):
+        input_dir = folder_paths.get_input_directory()
+        files = [f for f in os.listdir(input_dir) if os.path.isfile(os.path.join(input_dir, f))]
+        return {"required": {
+                    "image": (sorted(files), ),
+                    "link_id": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}),
+                    "save_to_workflow": ("BOOLEAN", {"default": False}),
+                    "image_data": ("STRING", {"multiline": False}),
+                    "trigger_always": ("BOOLEAN", {"default": False, "label_on": "enable", "label_off": "disable"}),
+                },
+                }
+
+    FUNCTION = "doit"
+
+    RETURN_TYPES = ("IMAGE", "MASK")
+
+    CATEGORY = "ImpactPack/Util"
+
+    def doit(self, image, link_id, save_to_workflow, image_data, trigger_always):
+        if save_to_workflow:
+            try:
+                image_data = base64.b64decode(image_data.split(",")[1])
+                i = Image.open(BytesIO(image_data))
+                i = ImageOps.exif_transpose(i)
+                image = i.convert("RGB")
+                image = np.array(image).astype(np.float32) / 255.0
+                image = torch.from_numpy(image)[None,]
+                if 'A' in i.getbands():
+                    mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
+                    mask = 1.
- torch.from_numpy(mask) + else: + mask = torch.zeros((64, 64), dtype=torch.float32, device="cpu") + return (image, mask.unsqueeze(0)) + except Exception as e: + print(f"[WARN] ComfyUI-Impact-Pack: ImageReceiver - invalid 'image_data'") + mask = torch.zeros((64, 64), dtype=torch.float32, device="cpu") + return (empty_pil_tensor(64, 64), mask, ) + else: + return nodes.LoadImage().load_image(image) + + @classmethod + def VALIDATE_INPUTS(s, image, link_id, save_to_workflow, image_data, trigger_always): + if image != '#DATA' and not folder_paths.exists_annotated_filepath(image) or image.startswith("/") or ".." in image: + return "Invalid image file: {}".format(image) + + return True + + @classmethod + def IS_CHANGED(s, image, link_id, save_to_workflow, image_data, trigger_always): + if trigger_always: + return float("NaN") + else: + if save_to_workflow: + return hash(image_data) + else: + return hash(image) + + +from server import PromptServer + +class ImageSender(nodes.PreviewImage): + @classmethod + def INPUT_TYPES(s): + return {"required": { + "images": ("IMAGE", ), + "filename_prefix": ("STRING", {"default": "ImgSender"}), + "link_id": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), }, + "hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"}, + } + + OUTPUT_NODE = True + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, images, filename_prefix="ImgSender", link_id=0, prompt=None, extra_pnginfo=None): + result = nodes.PreviewImage().save_images(images, filename_prefix, prompt, extra_pnginfo) + PromptServer.instance.send_sync("img-send", {"link_id": link_id, "images": result['ui']['images']}) + return result + + +class LatentReceiver: + def __init__(self): + self.input_dir = folder_paths.get_input_directory() + self.type = "input" + + @classmethod + def INPUT_TYPES(s): + def check_file_extension(x): + return x.endswith(".latent") or x.endswith(".latent.png") + + input_dir = folder_paths.get_input_directory() + files = [f for f in os.listdir(input_dir) if os.path.isfile(os.path.join(input_dir, f)) and check_file_extension(f)] + return {"required": { + "latent": (sorted(files), ), + "link_id": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + "trigger_always": ("BOOLEAN", {"default": False, "label_on": "enable", "label_off": "disable"}), + }, + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + RETURN_TYPES = ("LATENT",) + + @staticmethod + def load_preview_latent(image_path): + if not os.path.exists(image_path): + return None + + image = Image.open(image_path) + exif_data = piexif.load(image.info["exif"]) + + if piexif.ExifIFD.UserComment in exif_data["Exif"]: + compressed_data = exif_data["Exif"][piexif.ExifIFD.UserComment] + compressed_data_io = BytesIO(compressed_data) + with zipfile.ZipFile(compressed_data_io, mode='r') as archive: + tensor_bytes = archive.read("latent") + tensor = safetensors.torch.load(tensor_bytes) + return {"samples": tensor['latent_tensor']} + return None + + def parse_filename(self, filename): + pattern = r"^(.*)/(.*?)\[(.*)\]\s*$" + match = re.match(pattern, filename) + if match: + subfolder = match.group(1) + filename = match.group(2).rstrip() + file_type = match.group(3) + else: + subfolder = '' + file_type = self.type + + return {'filename': filename, 'subfolder': subfolder, 'type': file_type} + + def doit(self, **kwargs): + if 'latent' not in kwargs: + return (torch.zeros([1, 4, 8, 8]), ) + + latent = kwargs['latent'] + + latent_name = latent + latent_path = 
folder_paths.get_annotated_filepath(latent_name) + + if latent.endswith(".latent"): + latent = safetensors.torch.load_file(latent_path, device="cpu") + multiplier = 1.0 + if "latent_format_version_0" not in latent: + multiplier = 1.0 / 0.18215 + samples = {"samples": latent["latent_tensor"].float() * multiplier} + else: + samples = LatentReceiver.load_preview_latent(latent_path) + + if samples is None: + samples = {'samples': torch.zeros([1, 4, 8, 8])} + + preview = self.parse_filename(latent_name) + + return { + 'ui': {"images": [preview]}, + 'result': (samples, ) + } + + @classmethod + def IS_CHANGED(s, latent, link_id, trigger_always): + if trigger_always: + return float("NaN") + else: + image_path = folder_paths.get_annotated_filepath(latent) + m = hashlib.sha256() + with open(image_path, 'rb') as f: + m.update(f.read()) + return m.digest().hex() + + @classmethod + def VALIDATE_INPUTS(s, latent, link_id, trigger_always): + if not folder_paths.exists_annotated_filepath(latent) or latent.startswith("/") or ".." in latent: + return "Invalid latent file: {}".format(latent) + return True + + +class LatentSender(nodes.SaveLatent): + def __init__(self): + super().__init__() + self.output_dir = folder_paths.get_temp_directory() + self.type = "temp" + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "samples": ("LATENT", ), + "filename_prefix": ("STRING", {"default": "latents/LatentSender"}), + "link_id": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + "preview_method": (["Latent2RGB-SDXL", "Latent2RGB-SD15", "TAESDXL", "TAESD15"],) + }, + "hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"}, + } + + OUTPUT_NODE = True + + RETURN_TYPES = () + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + @staticmethod + def save_to_file(tensor_bytes, prompt, extra_pnginfo, image, image_path): + compressed_data = BytesIO() + with zipfile.ZipFile(compressed_data, mode='w') as archive: + archive.writestr("latent", tensor_bytes) + image = image.copy() + exif_data = {"Exif": {piexif.ExifIFD.UserComment: compressed_data.getvalue()}} + + metadata = PngInfo() + if prompt is not None: + metadata.add_text("prompt", json.dumps(prompt)) + if extra_pnginfo is not None: + for x in extra_pnginfo: + metadata.add_text(x, json.dumps(extra_pnginfo[x])) + + exif_bytes = piexif.dump(exif_data) + image.save(image_path, format='png', exif=exif_bytes, pnginfo=metadata, optimize=True) + + @staticmethod + def prepare_preview(latent_tensor, preview_method): + from comfy.cli_args import LatentPreviewMethod + import comfy.latent_formats as latent_formats + + lower_bound = 128 + upper_bound = 256 + + if preview_method == "Latent2RGB-SD15": + latent_format = latent_formats.SD15() + method = LatentPreviewMethod.Latent2RGB + elif preview_method == "TAESD15": + latent_format = latent_formats.SD15() + method = LatentPreviewMethod.TAESD + elif preview_method == "TAESDXL": + latent_format = latent_formats.SDXL() + method = LatentPreviewMethod.TAESD + else: # preview_method == "Latent2RGB-SDXL" + latent_format = latent_formats.SDXL() + method = LatentPreviewMethod.Latent2RGB + + previewer = core.get_previewer("cpu", latent_format=latent_format, force=True, method=method) + + image = previewer.decode_latent_to_preview(latent_tensor) + min_size = min(image.size[0], image.size[1]) + max_size = max(image.size[0], image.size[1]) + + scale_factor = 1 + if max_size > upper_bound: + scale_factor = upper_bound/max_size + + # prevent too small preview + if min_size*scale_factor < lower_bound: + 
scale_factor = lower_bound/min_size + + w = int(image.size[0] * scale_factor) + h = int(image.size[1] * scale_factor) + + image = image.resize((w, h), resample=Image.NEAREST) + + return LatentSender.attach_format_text(image) + + @staticmethod + def attach_format_text(image): + width_a, height_a = image.size + + letter_image = Image.open(latent_letter_path) + width_b, height_b = letter_image.size + + new_width = max(width_a, width_b) + new_height = height_a + height_b + + new_image = Image.new('RGB', (new_width, new_height), (0, 0, 0)) + + offset_x = (new_width - width_b) // 2 + offset_y = (height_a + (new_height - height_a - height_b) // 2) + new_image.paste(letter_image, (offset_x, offset_y)) + + new_image.paste(image, (0, 0)) + + return new_image + + def doit(self, samples, filename_prefix="latents/LatentSender", link_id=0, preview_method="Latent2RGB-SDXL", prompt=None, extra_pnginfo=None): + full_output_folder, filename, counter, subfolder, filename_prefix = folder_paths.get_save_image_path(filename_prefix, self.output_dir) + + # load preview + preview = LatentSender.prepare_preview(samples['samples'], preview_method) + + # support save metadata for latent sharing + file = f"{filename}_{counter:05}_.latent.png" + fullpath = os.path.join(full_output_folder, file) + + output = {"latent_tensor": samples["samples"]} + + tensor_bytes = safetensors.torch.save(output) + LatentSender.save_to_file(tensor_bytes, prompt, extra_pnginfo, preview, fullpath) + + latent_path = { + 'filename': file, + 'subfolder': subfolder, + 'type': self.type + } + + PromptServer.instance.send_sync("latent-send", {"link_id": link_id, "images": [latent_path]}) + + return {'ui': {'images': [latent_path]}} + + +class ImpactWildcardProcessor: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "wildcard_text": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "populated_text": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "mode": ("BOOLEAN", {"default": True, "label_on": "Populate", "label_off": "Fixed"}), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "Select to add Wildcard": (["Select the Wildcard to add to the text"],), + }, + } + + CATEGORY = "ImpactPack/Prompt" + + RETURN_TYPES = ("STRING", ) + FUNCTION = "doit" + + @staticmethod + def process(**kwargs): + return impact.wildcards.process(**kwargs) + + def doit(self, *args, **kwargs): + populated_text = kwargs['populated_text'] + return (populated_text, ) + + +class ImpactWildcardEncode: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "model": ("MODEL",), + "clip": ("CLIP",), + "wildcard_text": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "populated_text": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "mode": ("BOOLEAN", {"default": True, "label_on": "Populate", "label_off": "Fixed"}), + "Select to add LoRA": (["Select the LoRA to add to the text"] + folder_paths.get_filename_list("loras"), ), + "Select to add Wildcard": (["Select the Wildcard to add to the text"], ), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + }, + } + + CATEGORY = "ImpactPack/Prompt" + + RETURN_TYPES = ("MODEL", "CLIP", "CONDITIONING", "STRING") + RETURN_NAMES = ("model", "clip", "conditioning", "populated_text") + FUNCTION = "doit" + + @staticmethod + def process_with_loras(**kwargs): + return impact.wildcards.process_with_loras(**kwargs) + + @staticmethod + def get_wildcard_list(): + return impact.wildcards.get_wildcard_list() + + def doit(self, *args, **kwargs): 
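+        # NOTE: 'populated_text' has already been filled in from 'wildcard_text' by the
+        # onprompt handler (see onprompt_populate_wildcards in impact_server.py), so this
+        # node only applies the LoRA syntax of the populated prompt.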
+        populated = kwargs['populated_text']
+        model, clip, conditioning = impact.wildcards.process_with_loras(populated, kwargs['model'], kwargs['clip'])
+        return (model, clip, conditioning, populated)
+
+
+
diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_sampling.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_sampling.py
new file mode 100644
index 0000000000000000000000000000000000000000..8ff6f521c7cbb5f008bb59c7c0cf672aeab97a4e
--- /dev/null
+++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_sampling.py
@@ -0,0 +1,232 @@
+import nodes
+from comfy.k_diffusion import sampling as k_diffusion_sampling
+from comfy import samplers
+from comfy_extras import nodes_custom_sampler
+
+import torch
+import math
+
+
+def calculate_sigmas(model, sampler, scheduler, steps):
+    discard_penultimate_sigma = False
+    if sampler in ['dpm_2', 'dpm_2_ancestral', 'uni_pc', 'uni_pc_bh2']:
+        steps += 1
+        discard_penultimate_sigma = True
+
+    sigmas = samplers.calculate_sigmas_scheduler(model.model, scheduler, steps)
+
+    if discard_penultimate_sigma:
+        sigmas = torch.cat([sigmas[:-2], sigmas[-1:]])
+    return sigmas
+
+
+def get_noise_sampler(x, cpu, total_sigmas, **kwargs):
+    if 'extra_args' in kwargs and 'seed' in kwargs['extra_args']:
+        sigma_min, sigma_max = total_sigmas[total_sigmas > 0].min(), total_sigmas.max()
+        seed = kwargs['extra_args'].get("seed", None)
+        return k_diffusion_sampling.BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=seed, cpu=cpu)
+    return None
+
+
+def ksampler(sampler_name, total_sigmas, extra_options={}, inpaint_options={}):
+    if sampler_name == "dpmpp_sde":
+        def sample_dpmpp_sde(model, x, sigmas, **kwargs):
+            noise_sampler = get_noise_sampler(x, True, total_sigmas, **kwargs)
+            if noise_sampler is not None:
+                kwargs['noise_sampler'] = noise_sampler
+
+            return k_diffusion_sampling.sample_dpmpp_sde(model, x, sigmas, **kwargs)
+
+        sampler_function = sample_dpmpp_sde
+
+    elif sampler_name == "dpmpp_sde_gpu":
+        def sample_dpmpp_sde(model, x, sigmas, **kwargs):
+            noise_sampler = get_noise_sampler(x, False, total_sigmas, **kwargs)
+            if noise_sampler is not None:
+                kwargs['noise_sampler'] = noise_sampler
+
+            return k_diffusion_sampling.sample_dpmpp_sde_gpu(model, x, sigmas, **kwargs)
+
+        sampler_function = sample_dpmpp_sde
+
+    elif sampler_name == "dpmpp_2m_sde":
+        def sample_dpmpp_sde(model, x, sigmas, **kwargs):
+            noise_sampler = get_noise_sampler(x, True, total_sigmas, **kwargs)
+            if noise_sampler is not None:
+                kwargs['noise_sampler'] = noise_sampler
+
+            return k_diffusion_sampling.sample_dpmpp_2m_sde(model, x, sigmas, **kwargs)
+
+        sampler_function = sample_dpmpp_sde
+
+    elif sampler_name == "dpmpp_2m_sde_gpu":
+        def sample_dpmpp_sde(model, x, sigmas, **kwargs):
+            noise_sampler = get_noise_sampler(x, False, total_sigmas, **kwargs)
+            if noise_sampler is not None:
+                kwargs['noise_sampler'] = noise_sampler
+
+            return k_diffusion_sampling.sample_dpmpp_2m_sde_gpu(model, x, sigmas, **kwargs)
+
+        sampler_function = sample_dpmpp_sde
+
+    elif sampler_name == "dpmpp_3m_sde":
+        def sample_dpmpp_sde(model, x, sigmas, **kwargs):
+            noise_sampler = get_noise_sampler(x, True, total_sigmas, **kwargs)
+            if noise_sampler is not None:
+                kwargs['noise_sampler'] = noise_sampler
+
+            return k_diffusion_sampling.sample_dpmpp_3m_sde(model, x, sigmas, **kwargs)
+
+        sampler_function = sample_dpmpp_sde
+
+    elif sampler_name == "dpmpp_3m_sde_gpu":
+        def sample_dpmpp_sde(model, x, sigmas, **kwargs):
+            noise_sampler = get_noise_sampler(x, False, total_sigmas, **kwargs)
+            if noise_sampler is not None:
+                kwargs['noise_sampler'] = noise_sampler
+
+            return k_diffusion_sampling.sample_dpmpp_3m_sde_gpu(model, x, sigmas, **kwargs)
+
+        sampler_function = sample_dpmpp_sde
+    else:
+        return samplers.ksampler(sampler_name, extra_options, inpaint_options)
+
+    return samplers.KSAMPLER(sampler_function, extra_options, inpaint_options)
+
+
+def separated_sample(model, add_noise, seed, steps, cfg, sampler_name, scheduler, positive, negative,
+                     latent_image, start_at_step, end_at_step, return_with_leftover_noise, sigma_ratio=1.0, sampler_opt=None):
+    if sampler_opt is None:
+        total_sigmas = calculate_sigmas(model, sampler_name, scheduler, steps)
+    else:
+        total_sigmas = calculate_sigmas(model, "", scheduler, steps)
+
+    sigmas = total_sigmas[start_at_step:end_at_step+1] * sigma_ratio
+    if sampler_opt is None:
+        impact_sampler = ksampler(sampler_name, total_sigmas)
+    else:
+        impact_sampler = sampler_opt
+
+    if len(sigmas) == 0 or (len(sigmas) == 1 and sigmas[0] == 0):
+        return latent_image
+
+    res = nodes_custom_sampler.SamplerCustom().sample(model, add_noise, seed, cfg, positive, negative, impact_sampler, sigmas, latent_image)
+
+    if return_with_leftover_noise:
+        return res[0]
+    else:
+        return res[1]
+
+
+def ksampler_wrapper(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise,
+                     refiner_ratio=None, refiner_model=None, refiner_clip=None, refiner_positive=None, refiner_negative=None, sigma_factor=1.0):
+
+    if refiner_ratio is None or refiner_model is None or refiner_clip is None or refiner_positive is None or refiner_negative is None:
+        refined_latent = nodes.KSampler().sample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise * sigma_factor)[0]
+    else:
+        advanced_steps = math.floor(steps / denoise)
+        start_at_step = advanced_steps - steps
+        end_at_step = start_at_step + math.floor(steps * (1.0 - refiner_ratio))
+
+        # print(f"pre: {start_at_step} .. {end_at_step} / {advanced_steps}")
+        temp_latent = separated_sample(model, True, seed, advanced_steps, cfg, sampler_name, scheduler,
+                                       positive, negative, latent_image, start_at_step, end_at_step, True, sigma_ratio=sigma_factor)
+
+        if 'noise_mask' in latent_image:
+            # noise_latent = \
+            #     impact_sampling.separated_sample(refiner_model, "enable", seed, advanced_steps, cfg, sampler_name,
+            #                                      scheduler, refiner_positive, refiner_negative, latent_image, end_at_step,
+            #                                      end_at_step, "enable")
+
+            latent_compositor = nodes.NODE_CLASS_MAPPINGS['LatentCompositeMasked']()
+            temp_latent = latent_compositor.composite(latent_image, temp_latent, 0, 0, False, latent_image['noise_mask'])[0]
+
+        # print(f"post: {end_at_step} ..
{advanced_steps + 1} / {advanced_steps}") + refined_latent = separated_sample(refiner_model, False, seed, advanced_steps, cfg, sampler_name, scheduler, + refiner_positive, refiner_negative, temp_latent, end_at_step, advanced_steps + 1, False, sigma_ratio=sigma_factor) + + return refined_latent + + +class KSamplerAdvancedWrapper: + params = None + + def __init__(self, model, cfg, sampler_name, scheduler, positive, negative, sampler_opt=None, sigma_factor=1.0): + self.params = model, cfg, sampler_name, scheduler, positive, negative, sigma_factor + self.sampler_opt = sampler_opt + + def clone_with_conditionings(self, positive, negative): + model, cfg, sampler_name, scheduler, _, _, _ = self.params + return KSamplerAdvancedWrapper(model, cfg, sampler_name, scheduler, positive, negative, self.sampler_opt) + + def sample_advanced(self, add_noise, seed, steps, latent_image, start_at_step, end_at_step, return_with_leftover_noise, hook=None, + recovery_mode="ratio additional", recovery_sampler="AUTO", recovery_sigma_ratio=1.0): + + model, cfg, sampler_name, scheduler, positive, negative, sigma_factor = self.params + # steps, start_at_step, end_at_step = self.compensate_denoise(steps, start_at_step, end_at_step) + + if hook is not None: + model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent = hook.pre_ksample_advanced(model, add_noise, seed, steps, cfg, sampler_name, scheduler, + positive, negative, latent_image, start_at_step, end_at_step, + return_with_leftover_noise) + + if recovery_mode != 'DISABLE' and sampler_name in ['uni_pc', 'uni_pc_bh2', 'dpmpp_sde', 'dpmpp_sde_gpu', 'dpmpp_2m_sde', 'dpmpp_2m_sde_gpu', 'dpmpp_3m_sde', 'dpmpp_3m_sde_gpu']: + base_image = latent_image.copy() + if recovery_mode == "ratio between": + sigma_ratio = 1.0 - recovery_sigma_ratio + else: + sigma_ratio = 1.0 + else: + base_image = None + sigma_ratio = 1.0 + + try: + if sigma_ratio > 0: + latent_image = separated_sample(model, add_noise, seed, steps, cfg, sampler_name, scheduler, + positive, negative, latent_image, start_at_step, end_at_step, + return_with_leftover_noise, sigma_ratio=sigma_ratio * sigma_factor, sampler_opt=self.sampler_opt) + except ValueError as e: + if str(e) == 'sigma_min and sigma_max must not be 0': + print(f"\nWARN: sampling skipped - sigma_min and sigma_max are 0") + return latent_image + + if (recovery_sigma_ratio > 0 and recovery_mode != 'DISABLE' and + sampler_name in ['uni_pc', 'uni_pc_bh2', 'dpmpp_sde', 'dpmpp_sde_gpu', 'dpmpp_2m_sde', 'dpmpp_2m_sde_gpu', 'dpmpp_3m_sde', 'dpmpp_3m_sde_gpu']): + compensate = 0 if sampler_name in ['uni_pc', 'uni_pc_bh2', 'dpmpp_sde', 'dpmpp_sde_gpu', 'dpmpp_2m_sde', 'dpmpp_2m_sde_gpu', 'dpmpp_3m_sde', 'dpmpp_3m_sde_gpu'] else 2 + if recovery_sampler == "AUTO": + recovery_sampler = 'dpm_fast' if sampler_name in ['uni_pc', 'uni_pc_bh2', 'dpmpp_sde', 'dpmpp_sde_gpu'] else 'dpmpp_2m' + + latent_compositor = nodes.NODE_CLASS_MAPPINGS['LatentCompositeMasked']() + + noise_mask = latent_image['noise_mask'] + + if len(noise_mask.shape) == 4: + noise_mask = noise_mask.squeeze(0).squeeze(0) + + latent_image = latent_compositor.composite(base_image, latent_image, 0, 0, False, noise_mask)[0] + + try: + latent_image = separated_sample(model, add_noise, seed, steps, cfg, recovery_sampler, scheduler, + positive, negative, latent_image, start_at_step-compensate, end_at_step, + return_with_leftover_noise, sigma_ratio=recovery_sigma_ratio * sigma_factor, sampler_opt=self.sampler_opt) + except ValueError as e: + if str(e) == 'sigma_min and 
sigma_max must not be 0': + print(f"\nWARN: sampling skipped - sigma_min and sigma_max are 0") + + return latent_image + + +class KSamplerWrapper: + params = None + + def __init__(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise): + self.params = model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise + + def sample(self, latent_image, hook=None): + model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise = self.params + + if hook is not None: + model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise = \ + hook.pre_ksample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise) + + return nodes.common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)[0] diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_server.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_server.py new file mode 100644 index 0000000000000000000000000000000000000000..350fc95bee451316134e8a479a9cec5e3f5c616c --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_server.py @@ -0,0 +1,556 @@ +import os +import threading +import traceback + +from aiohttp import web + +import impact +import server +import folder_paths + +import torchvision + +import impact.core as core +import impact.impact_pack as impact_pack +from impact.utils import to_tensor +from segment_anything import SamPredictor, sam_model_registry +import numpy as np +import nodes +from PIL import Image +import io +import impact.wildcards as wildcards +import comfy +from io import BytesIO +import random + + +@server.PromptServer.instance.routes.post("/upload/temp") +async def upload_image(request): + upload_dir = folder_paths.get_temp_directory() + + if not os.path.exists(upload_dir): + os.makedirs(upload_dir) + + post = await request.post() + image = post.get("image") + + if image and image.file: + filename = image.filename + if not filename: + return web.Response(status=400) + + split = os.path.splitext(filename) + i = 1 + while os.path.exists(os.path.join(upload_dir, filename)): + filename = f"{split[0]} ({i}){split[1]}" + i += 1 + + filepath = os.path.join(upload_dir, filename) + + with open(filepath, "wb") as f: + f.write(image.file.read()) + + return web.json_response({"name": filename}) + else: + return web.Response(status=400) + + +sam_predictor = None +default_sam_model_name = os.path.join(impact_pack.model_path, "sams", "sam_vit_b_01ec64.pth") + +sam_lock = threading.Condition() + +last_prepare_data = None + + +def async_prepare_sam(image_dir, model_name, filename): + with sam_lock: + global sam_predictor + + if 'vit_h' in model_name: + model_kind = 'vit_h' + elif 'vit_l' in model_name: + model_kind = 'vit_l' + else: + model_kind = 'vit_b' + + sam_model = sam_model_registry[model_kind](checkpoint=model_name) + sam_predictor = SamPredictor(sam_model) + + image_path = os.path.join(image_dir, filename) + image = nodes.LoadImage().load_image(image_path)[0] + image = np.clip(255. 
* image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8) + + if impact.config.get_config()['sam_editor_cpu']: + device = 'cpu' + else: + device = comfy.model_management.get_torch_device() + + sam_predictor.model.to(device=device) + sam_predictor.set_image(image, "RGB") + sam_predictor.model.cpu() + + +@server.PromptServer.instance.routes.post("/sam/prepare") +async def sam_prepare(request): + global sam_predictor + global last_prepare_data + data = await request.json() + + with sam_lock: + if last_prepare_data is not None and last_prepare_data == data: + # already loaded: skip -- prevent redundant loading + return web.Response(status=200) + + last_prepare_data = data + + model_name = 'sam_vit_b_01ec64.pth' + if data['sam_model_name'] == 'auto': + model_name = impact.config.get_config()['sam_editor_model'] + + model_name = os.path.join(impact_pack.model_path, "sams", model_name) + + print(f"[INFO] ComfyUI-Impact-Pack: Loading SAM model '{impact_pack.model_path}'") + + filename, image_dir = folder_paths.annotated_filepath(data["filename"]) + + if image_dir is None: + typ = data['type'] if data['type'] != '' else 'output' + image_dir = folder_paths.get_directory_by_type(typ) + if data['subfolder'] is not None and data['subfolder'] != '': + image_dir += f"/{data['subfolder']}" + + if image_dir is None: + return web.Response(status=400) + + thread = threading.Thread(target=async_prepare_sam, args=(image_dir, model_name, filename,)) + thread.start() + + print(f"[INFO] ComfyUI-Impact-Pack: SAM model loaded. ") + + +@server.PromptServer.instance.routes.post("/sam/release") +async def release_sam(request): + global sam_predictor + + with sam_lock: + del sam_predictor + sam_predictor = None + + print(f"[INFO] ComfyUI-Impact-Pack: unloading SAM model") + + +@server.PromptServer.instance.routes.post("/sam/detect") +async def sam_detect(request): + global sam_predictor + with sam_lock: + if sam_predictor is not None: + if impact.config.get_config()['sam_editor_cpu']: + device = 'cpu' + else: + device = comfy.model_management.get_torch_device() + + sam_predictor.model.to(device=device) + try: + data = await request.json() + + positive_points = data['positive_points'] + negative_points = data['negative_points'] + threshold = data['threshold'] + + points = [] + plabs = [] + + for p in positive_points: + points.append(p) + plabs.append(1) + + for p in negative_points: + points.append(p) + plabs.append(0) + + detected_masks = core.sam_predict(sam_predictor, points, plabs, None, threshold) + mask = core.combine_masks2(detected_masks) + + if mask is None: + return web.Response(status=400) + + image = mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])).movedim(1, -1).expand(-1, -1, -1, 3) + i = 255. 
* image.cpu().numpy() + + img = Image.fromarray(np.clip(i[0], 0, 255).astype(np.uint8)) + + img_buffer = io.BytesIO() + img.save(img_buffer, format='png') + + headers = {'Content-Type': 'image/png'} + finally: + sam_predictor.model.to(device="cpu") + + return web.Response(body=img_buffer.getvalue(), headers=headers) + + else: + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/impact/wildcards/list") +async def wildcards_list(request): + data = {'data': impact.wildcards.get_wildcard_list()} + return web.json_response(data) + + +@server.PromptServer.instance.routes.post("/impact/wildcards") +async def populate_wildcards(request): + data = await request.json() + populated = wildcards.process(data['text'], data.get('seed', None)) + return web.json_response({"text": populated}) + + +segs_picker_map = {} + +@server.PromptServer.instance.routes.get("/impact/segs/picker/count") +async def segs_picker_count(request): + node_id = request.rel_url.query.get('id', '') + + if node_id in segs_picker_map: + res = len(segs_picker_map[node_id]) + return web.Response(status=200, text=str(res)) + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/impact/segs/picker/view") +async def segs_picker(request): + node_id = request.rel_url.query.get('id', '') + idx = int(request.rel_url.query.get('idx', '')) + + if node_id in segs_picker_map and idx < len(segs_picker_map[node_id]): + img = to_tensor(segs_picker_map[node_id][idx]).permute(0, 3, 1, 2).squeeze(0) + pil = torchvision.transforms.ToPILImage('RGB')(img) + + image_bytes = BytesIO() + pil.save(image_bytes, format="PNG") + image_bytes.seek(0) + return web.Response(status=200, body=image_bytes, content_type='image/png', headers={"Content-Disposition": f"filename={node_id}{idx}.png"}) + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/view/validate") +async def view_validate(request): + if "filename" in request.rel_url.query: + filename = request.rel_url.query["filename"] + subfolder = request.rel_url.query["subfolder"] + filename, base_dir = folder_paths.annotated_filepath(filename) + + if filename == '' or filename[0] == '/' or '..' in filename: + return web.Response(status=400) + + if base_dir is None: + base_dir = folder_paths.get_input_directory() + + file = os.path.join(base_dir, subfolder, filename) + + if os.path.isfile(file): + return web.Response(status=200) + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/impact/validate/pb_id_image") +async def view_validate(request): + if "id" in request.rel_url.query: + pb_id = request.rel_url.query["id"] + + if pb_id not in core.preview_bridge_image_id_map: + return web.Response(status=400) + + file = core.preview_bridge_image_id_map[pb_id] + if os.path.isfile(file): + return web.Response(status=200) + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/impact/set/pb_id_image") +async def set_previewbridge_image(request): + try: + if "filename" in request.rel_url.query: + node_id = request.rel_url.query["node_id"] + filename = request.rel_url.query["filename"] + path_type = request.rel_url.query["type"] + subfolder = request.rel_url.query["subfolder"] + filename, output_dir = folder_paths.annotated_filepath(filename) + + if filename == '' or filename[0] == '/' or '..' 
in filename: + return web.Response(status=400) + + if output_dir is None: + if path_type == 'input': + output_dir = folder_paths.get_input_directory() + elif path_type == 'output': + output_dir = folder_paths.get_output_directory() + else: + output_dir = folder_paths.get_temp_directory() + + file = os.path.join(output_dir, subfolder, filename) + item = { + 'filename': filename, + 'type': path_type, + 'subfolder': subfolder, + } + pb_id = core.set_previewbridge_image(node_id, file, item) + + return web.Response(status=200, text=pb_id) + except Exception: + traceback.print_exc() + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/impact/get/pb_id_image") +async def get_previewbridge_image(request): + if "id" in request.rel_url.query: + pb_id = request.rel_url.query["id"] + + if pb_id in core.preview_bridge_image_id_map: + _, path_item = core.preview_bridge_image_id_map[pb_id] + return web.json_response(path_item) + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/impact/view/pb_id_image") +async def view_previewbridge_image(request): + if "id" in request.rel_url.query: + pb_id = request.rel_url.query["id"] + + if pb_id in core.preview_bridge_image_id_map: + file = core.preview_bridge_image_id_map[pb_id] + + with Image.open(file) as img: + filename = os.path.basename(file) + return web.FileResponse(file, headers={"Content-Disposition": f"filename=\"{filename}\""}) + + return web.Response(status=400) + + +def onprompt_for_switch(json_data): + inversed_switch_info = {} + onprompt_switch_info = {} + onprompt_cond_branch_info = {} + + for k, v in json_data['prompt'].items(): + if 'class_type' not in v: + continue + + cls = v['class_type'] + if cls == 'ImpactInversedSwitch': + select_input = v['inputs']['select'] + if isinstance(select_input, list) and len(select_input) == 2: + input_node = json_data['prompt'][select_input[0]] + if input_node['class_type'] == 'ImpactInt' and 'inputs' in input_node and 'value' in input_node['inputs']: + inversed_switch_info[k] = input_node['inputs']['value'] + else: + inversed_switch_info[k] = select_input + + elif cls in ['ImpactSwitch', 'LatentSwitch', 'SEGSSwitch', 'ImpactMakeImageList']: + if 'sel_mode' in v['inputs'] and v['inputs']['sel_mode'] and 'select' in v['inputs']: + select_input = v['inputs']['select'] + if isinstance(select_input, list) and len(select_input) == 2: + input_node = json_data['prompt'][select_input[0]] + if input_node['class_type'] == 'ImpactInt' and 'inputs' in input_node and 'value' in input_node['inputs']: + onprompt_switch_info[k] = input_node['inputs']['value'] + if input_node['class_type'] == 'ImpactSwitch' and 'inputs' in input_node and 'select' in input_node['inputs']: + if isinstance(input_node['inputs']['select'], int): + onprompt_switch_info[k] = input_node['inputs']['select'] + else: + print(f"\n##### ##### #####\n[WARN] {cls}: For the 'select' operation, only 'select_index' of the 'ImpactSwitch', which is not an input, or 'ImpactInt' and 'Primitive' are allowed as inputs.\n##### ##### #####\n") + else: + onprompt_switch_info[k] = select_input + + elif cls == 'ImpactConditionalBranchSelMode': + if 'sel_mode' in v['inputs'] and v['inputs']['sel_mode'] and 'cond' in v['inputs']: + cond_input = v['inputs']['cond'] + if isinstance(cond_input, list) and len(cond_input) == 2: + input_node = json_data['prompt'][cond_input[0]] + if (input_node['class_type'] == 'ImpactValueReceiver' and 'inputs' in input_node + and 'value' in input_node['inputs'] and 'typ' in 
input_node['inputs']): + if 'BOOLEAN' == input_node['inputs']['typ']: + try: + onprompt_cond_branch_info[k] = input_node['inputs']['value'].lower() == "true" + except: + pass + else: + onprompt_cond_branch_info[k] = cond_input + + for k, v in json_data['prompt'].items(): + disable_targets = set() + + for kk, vv in v['inputs'].items(): + if isinstance(vv, list) and len(vv) == 2: + if vv[0] in inversed_switch_info: + if vv[1] + 1 != inversed_switch_info[vv[0]]: + disable_targets.add(kk) + + if k in onprompt_switch_info: + selected_slot_name = f"input{onprompt_switch_info[k]}" + for kk, vv in v['inputs'].items(): + if kk != selected_slot_name and kk.startswith('input'): + disable_targets.add(kk) + + if k in onprompt_cond_branch_info: + selected_slot_name = "tt_value" if onprompt_cond_branch_info[k] else "ff_value" + for kk, vv in v['inputs'].items(): + if kk in ['tt_value', 'ff_value'] and kk != selected_slot_name: + disable_targets.add(kk) + + for kk in disable_targets: + del v['inputs'][kk] + +def onprompt_for_pickers(json_data): + detected_pickers = set() + + for k, v in json_data['prompt'].items(): + if 'class_type' not in v: + continue + + cls = v['class_type'] + if cls == 'ImpactSEGSPicker': + detected_pickers.add(k) + + # garbage collection + keys_to_remove = [key for key in segs_picker_map if key not in detected_pickers] + for key in keys_to_remove: + del segs_picker_map[key] + + +def gc_preview_bridge_cache(json_data): + prompt_keys = json_data['prompt'].keys() + + for key in list(core.preview_bridge_cache.keys()): + if key not in prompt_keys: + print(f"key deleted: {key}") + del core.preview_bridge_cache[key] + + +def workflow_imagereceiver_update(json_data): + prompt = json_data['prompt'] + + for v in prompt.values(): + if 'class_type' in v and v['class_type'] == 'ImageReceiver': + if v['inputs']['save_to_workflow']: + v['inputs']['image'] = "#DATA" + + +def regional_sampler_seed_update(json_data): + prompt = json_data['prompt'] + + for k, v in prompt.items(): + if 'class_type' in v and v['class_type'] == 'RegionalSampler': + seed_2nd_mode = v['inputs']['seed_2nd_mode'] + + new_seed = None + if seed_2nd_mode == 'increment': + new_seed = v['inputs']['seed_2nd']+1 + if new_seed > 1125899906842624: + new_seed = 0 + elif seed_2nd_mode == 'decrement': + new_seed = v['inputs']['seed_2nd']-1 + if new_seed < 0: + new_seed = 1125899906842624 + elif seed_2nd_mode == 'randomize': + new_seed = random.randint(0, 1125899906842624) + + if new_seed is not None: + server.PromptServer.instance.send_sync("impact-node-feedback", {"node_id": k, "widget_name": "seed_2nd", "type": "INT", "value": new_seed}) + + +def onprompt_populate_wildcards(json_data): + prompt = json_data['prompt'] + + updated_widget_values = {} + for k, v in prompt.items(): + if 'class_type' in v and (v['class_type'] == 'ImpactWildcardEncode' or v['class_type'] == 'ImpactWildcardProcessor'): + inputs = v['inputs'] + if inputs['mode'] and isinstance(inputs['populated_text'], str): + if isinstance(inputs['seed'], list): + try: + input_node = prompt[inputs['seed'][0]] + if input_node['class_type'] == 'ImpactInt': + input_seed = int(input_node['inputs']['value']) + if not isinstance(input_seed, int): + continue + if input_node['class_type'] == 'Seed (rgthree)': + input_seed = int(input_node['inputs']['seed']) + if not isinstance(input_seed, int): + continue + else: + print(f"[Impact Pack] Only `ImpactInt`, `Seed (rgthree)` and `Primitive` Node are allowed as the seed for '{v['class_type']}'. It will be ignored. 
") + continue + except: + continue + else: + input_seed = int(inputs['seed']) + + inputs['populated_text'] = wildcards.process(inputs['wildcard_text'], input_seed) + inputs['mode'] = False + + server.PromptServer.instance.send_sync("impact-node-feedback", {"node_id": k, "widget_name": "populated_text", "type": "STRING", "value": inputs['populated_text']}) + updated_widget_values[k] = inputs['populated_text'] + + if 'extra_data' in json_data and 'extra_pnginfo' in json_data['extra_data']: + for node in json_data['extra_data']['extra_pnginfo']['workflow']['nodes']: + key = str(node['id']) + if key in updated_widget_values: + node['widgets_values'][1] = updated_widget_values[key] + node['widgets_values'][2] = False + + +def onprompt_for_remote(json_data): + prompt = json_data['prompt'] + + for v in prompt.values(): + if 'class_type' in v: + cls = v['class_type'] + if cls == 'ImpactRemoteBoolean' or cls == 'ImpactRemoteInt': + inputs = v['inputs'] + node_id = str(inputs['node_id']) + + if node_id not in prompt: + continue + + target_inputs = prompt[node_id]['inputs'] + + widget_name = inputs['widget_name'] + if widget_name in target_inputs: + widget_type = None + if cls == 'ImpactRemoteBoolean' and isinstance(target_inputs[widget_name], bool): + widget_type = 'BOOLEAN' + + elif cls == 'ImpactRemoteInt' and (isinstance(target_inputs[widget_name], int) or isinstance(target_inputs[widget_name], float)): + widget_type = 'INT' + + if widget_type is None: + break + + target_inputs[widget_name] = inputs['value'] + server.PromptServer.instance.send_sync("impact-node-feedback", {"node_id": node_id, "widget_name": widget_name, "type": widget_type, "value": inputs['value']}) + + +def onprompt(json_data): + try: + onprompt_for_remote(json_data) # NOTE: top priority + onprompt_for_switch(json_data) + onprompt_for_pickers(json_data) + onprompt_populate_wildcards(json_data) + gc_preview_bridge_cache(json_data) + workflow_imagereceiver_update(json_data) + regional_sampler_seed_update(json_data) + except Exception as e: + print(f"[WARN] ComfyUI-Impact-Pack: Error on prompt - several features will not work.\n{e}") + + return json_data + + +server.PromptServer.instance.add_on_prompt_handler(onprompt) diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/legacy_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/legacy_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..61709ce09d5410d4e75722d2df0094c1a5e5fe93 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/legacy_nodes.py @@ -0,0 +1,273 @@ +import folder_paths + +import impact.mmdet_nodes as mmdet_nodes +from impact.utils import * +from impact.core import SEG +import impact.core as core +import nodes + +class NO_BBOX_MODEL: + pass + + +class NO_SEGM_MODEL: + pass + + +class MMDetLoader: + @classmethod + def INPUT_TYPES(s): + bboxs = ["bbox/"+x for x in folder_paths.get_filename_list("mmdets_bbox")] + segms = ["segm/"+x for x in folder_paths.get_filename_list("mmdets_segm")] + return {"required": {"model_name": (bboxs + segms, )}} + RETURN_TYPES = ("BBOX_MODEL", "SEGM_MODEL") + FUNCTION = "load_mmdet" + + CATEGORY = "ImpactPack/Legacy" + + def load_mmdet(self, model_name): + mmdet_path = folder_paths.get_full_path("mmdets", model_name) + model = mmdet_nodes.load_mmdet(mmdet_path) + + if model_name.startswith("bbox"): + return model, NO_SEGM_MODEL() + else: + return NO_BBOX_MODEL(), model + + +class BboxDetectorForEach: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "bbox_model": 
("BBOX_MODEL", ), + "image": ("IMAGE", ), + "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "dilation": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}), + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + } + } + + RETURN_TYPES = ("SEGS", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Legacy" + + @staticmethod + def detect(bbox_model, image, threshold, dilation, crop_factor, drop_size=1, detailer_hook=None): + mmdet_results = mmdet_nodes.inference_bbox(bbox_model, image, threshold) + segmasks = core.create_segmasks(mmdet_results) + + if dilation > 0: + segmasks = dilate_masks(segmasks, dilation) + + items = [] + h = image.shape[1] + w = image.shape[2] + for x in segmasks: + item_bbox = x[0] + item_mask = x[1] + + y1, x1, y2, x2 = item_bbox + + if x2 - x1 > drop_size and y2 - y1 > drop_size: + crop_region = make_crop_region(w, h, item_bbox, crop_factor) + cropped_image = crop_image(image, crop_region) + cropped_mask = crop_ndarray2(item_mask, crop_region) + confidence = x[2] + # bbox_size = (item_bbox[2]-item_bbox[0],item_bbox[3]-item_bbox[1]) # (w,h) + + item = SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox, None, None) + items.append(item) + + shape = h, w + return shape, items + + def doit(self, bbox_model, image, threshold, dilation, crop_factor): + return (BboxDetectorForEach.detect(bbox_model, image, threshold, dilation, crop_factor), ) + + +class SegmDetectorCombined: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segm_model": ("SEGM_MODEL", ), + "image": ("IMAGE", ), + "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "dilation": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}), + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Legacy" + + def doit(self, segm_model, image, threshold, dilation): + mmdet_results = mmdet_nodes.inference_segm(image, segm_model, threshold) + segmasks = core.create_segmasks(mmdet_results) + if dilation > 0: + segmasks = dilate_masks(segmasks, dilation) + + mask = combine_masks(segmasks) + return (mask,) + + +class BboxDetectorCombined(SegmDetectorCombined): + @classmethod + def INPUT_TYPES(s): + return {"required": { + "bbox_model": ("BBOX_MODEL", ), + "image": ("IMAGE", ), + "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "dilation": ("INT", {"default": 4, "min": 0, "max": 255, "step": 1}), + } + } + + def doit(self, bbox_model, image, threshold, dilation): + mmdet_results = mmdet_nodes.inference_bbox(bbox_model, image, threshold) + segmasks = core.create_segmasks(mmdet_results) + if dilation > 0: + segmasks = dilate_masks(segmasks, dilation) + + mask = combine_masks(segmasks) + return (mask,) + + +class SegmDetectorForEach: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segm_model": ("SEGM_MODEL", ), + "image": ("IMAGE", ), + "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), + "dilation": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}), + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + } + } + + RETURN_TYPES = ("SEGS", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Legacy" + + def doit(self, segm_model, image, threshold, dilation, crop_factor): + mmdet_results = mmdet_nodes.inference_segm(image, segm_model, threshold) + segmasks = core.create_segmasks(mmdet_results) + + if dilation > 0: + segmasks = dilate_masks(segmasks, dilation) + + 
items = []
+        h = image.shape[1]
+        w = image.shape[2]
+        for x in segmasks:
+            item_bbox = x[0]
+            item_mask = x[1]
+
+            crop_region = make_crop_region(w, h, item_bbox, crop_factor)
+            cropped_image = crop_image(image, crop_region)
+            cropped_mask = crop_ndarray2(item_mask, crop_region)
+            confidence = x[2]
+
+            item = SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox, None, None)
+            items.append(item)
+
+        shape = h, w
+        return ((shape, items), )
+
+
+class SegsMaskCombine:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                    "segs": ("SEGS", ),
+                    "image": ("IMAGE", ),
+                }
+                }
+
+    RETURN_TYPES = ("MASK",)
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Legacy"
+
+    @staticmethod
+    def combine(segs, image):
+        h = image.shape[1]
+        w = image.shape[2]
+
+        mask = np.zeros((h, w), dtype=np.uint8)
+
+        for seg in segs[1]:
+            cropped_mask = seg.cropped_mask
+            crop_region = seg.crop_region
+            mask[crop_region[1]:crop_region[3], crop_region[0]:crop_region[2]] |= (cropped_mask * 255).astype(np.uint8)
+
+        return torch.from_numpy(mask.astype(np.float32) / 255.0)
+
+    def doit(self, segs, image):
+        return (SegsMaskCombine.combine(segs, image), )
+
+
+class MaskPainter(nodes.PreviewImage):
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"images": ("IMAGE",), },
+                "hidden": {
+                    "prompt": "PROMPT",
+                    "extra_pnginfo": "EXTRA_PNGINFO",
+                },
+                "optional": {"mask_image": ("IMAGE_PATH",),
+                             "image": (["#placeholder"], )},
+                }
+
+    RETURN_TYPES = ("MASK",)
+
+    FUNCTION = "save_painted_images"
+
+    CATEGORY = "ImpactPack/Legacy"
+
+    def save_painted_images(self, images, filename_prefix="impact-mask",
+                            prompt=None, extra_pnginfo=None, mask_image=None, image=None):
+        if image == "#placeholder" or image['image_hash'] != id(images):
+            # new input image
+            res = self.save_images(images, filename_prefix, prompt, extra_pnginfo)
+
+            item = res['ui']['images'][0]
+
+            if not item['filename'].endswith(']'):
+                filepath = f"{item['filename']} [{item['type']}]"
+            else:
+                filepath = item['filename']
+
+            _, mask = nodes.LoadImage().load_image(filepath)
+
+            res['ui']['aux'] = [id(images), res['ui']['images']]
+            res['result'] = (mask, )
+
+            return res
+
+        else:
+            # new mask
+            if '0' in image:  # fallback
+                image = image['0']
+
+            forward = {'filename': image['forward_filename'],
+                       'subfolder': image['forward_subfolder'],
+                       'type': image['forward_type'], }
+
+            res = {'ui': {'images': [forward]}}
+
+            imgpath = ""
+            if 'subfolder' in image and image['subfolder'] != "":
+                imgpath = image['subfolder'] + "/"
+
+            imgpath += f"{image['filename']}"
+
+            if 'type' in image and image['type'] != "":
+                imgpath += f" [{image['type']}]"
+
+            res['ui']['aux'] = [id(images), [forward]]
+            _, mask = nodes.LoadImage().load_image(imgpath)
+            res['result'] = (mask, )
+
+            return res
diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/logics.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/logics.py
new file mode 100644
index 0000000000000000000000000000000000000000..ee692e44b90efde05dbd77492c2ad04ec3bb3971
--- /dev/null
+++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/logics.py
@@ -0,0 +1,703 @@
+import sys
+import time
+
+import execution
+import folder_paths
+import impact.impact_server
+from server import PromptServer
+from impact.utils import any_typ
+import impact.core as core
+import re
+
+
+class ImpactCompare:
+    @classmethod
+    def INPUT_TYPES(cls):
+        return {
+            "required": {
+                "cmp": (['a = b', 'a <> b', 'a > b', 'a < b', 'a >= b', 'a <= b', 'tt', 'ff'],),
+                "a": (any_typ, ),
+                "b": (any_typ, ),
+ }, + } + + FUNCTION = "doit" + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = ("BOOLEAN", ) + + def doit(self, cmp, a, b): + if cmp == "a = b": + return (a == b, ) + elif cmp == "a <> b": + return (a != b, ) + elif cmp == "a > b": + return (a > b, ) + elif cmp == "a < b": + return (a < b, ) + elif cmp == "a >= b": + return (a >= b, ) + elif cmp == "a <= b": + return (a <= b, ) + elif cmp == 'tt': + return (True, ) + else: + return (False, ) + + +class ImpactNotEmptySEGS: + @classmethod + def INPUT_TYPES(cls): + return {"required": {"segs": ("SEGS",)}} + + FUNCTION = "doit" + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = ("BOOLEAN", ) + + def doit(self, segs): + return (segs[1] != [], ) + + +class ImpactConditionalBranch: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "cond": ("BOOLEAN",), + "tt_value": (any_typ,), + "ff_value": (any_typ,), + }, + } + + FUNCTION = "doit" + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = (any_typ, ) + + def doit(self, cond, tt_value, ff_value): + if cond: + return (tt_value,) + else: + return (ff_value,) + + +class ImpactConditionalBranchSelMode: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "cond": ("BOOLEAN",), + "sel_mode": ("BOOLEAN", {"default": True, "label_on": "select_on_prompt", "label_off": "select_on_execution"}), + }, + "optional": { + "tt_value": (any_typ,), + "ff_value": (any_typ,), + }, + } + + FUNCTION = "doit" + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = (any_typ, ) + + def doit(self, cond, sel_mode, tt_value=None, ff_value=None): + print(f'tt={tt_value is None}\nff={ff_value is None}') + if cond: + return (tt_value,) + else: + return (ff_value,) + + +class ImpactConvertDataType: + def __init__(self): + pass + + @classmethod + def INPUT_TYPES(cls): + return {"required": {"value": (any_typ,)}} + + RETURN_TYPES = ("STRING", "FLOAT", "INT", "BOOLEAN") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic" + + @staticmethod + def is_number(string): + pattern = re.compile(r'^[-+]?[0-9]*\.?[0-9]+$') + return bool(pattern.match(string)) + + def doit(self, value): + if self.is_number(str(value)): + num = value + else: + if str.lower(str(value)) != "false": + num = 1 + else: + num = 0 + return (str(value), float(num), int(float(num)), bool(float(num)), ) + + +class ImpactIfNone: + def __init__(self): + pass + + @classmethod + def INPUT_TYPES(cls): + return { + "required": {}, + "optional": {"signal": (any_typ,), "any_input": (any_typ,), } + } + + RETURN_TYPES = (any_typ, "BOOLEAN", ) + RETURN_NAMES = ("signal_opt", "bool") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic" + + def doit(self, signal=None, any_input=None): + if any_input is None: + return (signal, False, ) + else: + return (signal, True, ) + + +class ImpactLogicalOperators: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "operator": (['and', 'or', 'xor'],), + "bool_a": ("BOOLEAN", {"forceInput": True}), + "bool_b": ("BOOLEAN", {"forceInput": True}), + }, + } + + FUNCTION = "doit" + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = ("BOOLEAN", ) + + def doit(self, operator, bool_a, bool_b): + if operator == "and": + return (bool_a and bool_b, ) + elif operator == "or": + return (bool_a or bool_b, ) + else: + return (bool_a != bool_b, ) + + +class ImpactConditionalStopIteration: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { "cond": ("BOOLEAN", {"forceInput": True}), }, + } + + FUNCTION = "doit" + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = () + + OUTPUT_NODE = True + + def doit(self, 
cond): + if cond: + PromptServer.instance.send_sync("stop-iteration", {}) + return {} + + +class ImpactNeg: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { "value": ("BOOLEAN", {"forceInput": True}), }, + } + + FUNCTION = "doit" + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = ("BOOLEAN", ) + + def doit(self, value): + return (not value, ) + + +class ImpactInt: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "value": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + }, + } + + FUNCTION = "doit" + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = ("INT", ) + + def doit(self, value): + return (value, ) + + +class ImpactFloat: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "value": ("FLOAT", {"default": 1.0, "min": -3.402823466e+38, "max": 3.402823466e+38}), + }, + } + + FUNCTION = "doit" + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = ("FLOAT", ) + + def doit(self, value): + return (value, ) + + +class ImpactValueSender: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "value": (any_typ, ), + "link_id": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + }, + "optional": { + "signal_opt": (any_typ,), + } + } + + OUTPUT_NODE = True + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = (any_typ, ) + RETURN_NAMES = ("signal", ) + + def doit(self, value, link_id=0, signal_opt=None): + PromptServer.instance.send_sync("value-send", {"link_id": link_id, "value": value}) + return (signal_opt, ) + + +class ImpactIntConstSender: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "signal": (any_typ, ), + "value": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + "link_id": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + }, + } + + OUTPUT_NODE = True + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = () + + def doit(self, signal, value, link_id=0): + PromptServer.instance.send_sync("value-send", {"link_id": link_id, "value": value}) + return {} + + +class ImpactValueReceiver: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "typ": (["STRING", "INT", "FLOAT", "BOOLEAN"], ), + "value": ("STRING", {"default": ""}), + "link_id": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + }, + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic" + + RETURN_TYPES = (any_typ, ) + + def doit(self, typ, value, link_id=0): + if typ == "INT": + return (int(value), ) + elif typ == "FLOAT": + return (float(value), ) + elif typ == "BOOLEAN": + return (value.lower() == "true", ) + else: + return (value, ) + + +class ImpactImageInfo: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "value": ("IMAGE", ), + }, + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + + RETURN_TYPES = ("INT", "INT", "INT", "INT") + RETURN_NAMES = ("batch", "height", "width", "channel") + + def doit(self, value): + return (value.shape[0], value.shape[1], value.shape[2], value.shape[3]) + + +class ImpactLatentInfo: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "value": ("LATENT", ), + }, + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + + RETURN_TYPES = ("INT", "INT", "INT", "INT") + RETURN_NAMES = ("batch", "height", "width", "channel") + + def doit(self, value): + shape = value['samples'].shape + return (shape[0], shape[2] * 8, shape[3] * 8, shape[1]) + + +class ImpactMinMax: + @classmethod + def INPUT_TYPES(cls): + return 
{"required": { + "mode": ("BOOLEAN", {"default": True, "label_on": "max", "label_off": "min"}), + "a": (any_typ,), + "b": (any_typ,), + }, + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + + RETURN_TYPES = ("INT", ) + + def doit(self, mode, a, b): + if mode: + return (max(a, b), ) + else: + return (min(a, b),) + + +class ImpactQueueTrigger: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "signal": (any_typ,), + "mode": ("BOOLEAN", {"default": True, "label_on": "Trigger", "label_off": "Don't trigger"}), + } + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + RETURN_TYPES = (any_typ,) + RETURN_NAMES = ("signal_opt",) + OUTPUT_NODE = True + + def doit(self, signal, mode): + if(mode): + PromptServer.instance.send_sync("impact-add-queue", {}) + + return (signal,) + + +class ImpactQueueTriggerCountdown: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "signal": (any_typ,), + "count": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "total": ("INT", {"default": 10, "min": 1, "max": 0xffffffffffffffff}), + "mode": ("BOOLEAN", {"default": True, "label_on": "Trigger", "label_off": "Don't trigger"}), + }, + "hidden": {"unique_id": "UNIQUE_ID"} + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + RETURN_TYPES = (any_typ, "INT", "INT") + RETURN_NAMES = ("signal_opt", "count", "total") + OUTPUT_NODE = True + + def doit(self, signal, count, total, mode, unique_id): + if count < total - 1 and (mode): + PromptServer.instance.send_sync("impact-node-feedback", + {"node_id": unique_id, "widget_name": "count", "type": "int", "value": count+1}) + PromptServer.instance.send_sync("impact-add-queue", {}) + if count >= total - 1: + PromptServer.instance.send_sync("impact-node-feedback", + {"node_id": unique_id, "widget_name": "count", "type": "int", "value": 0}) + + return (signal, count, total) + + + +class ImpactSetWidgetValue: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "signal": (any_typ,), + "node_id": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "widget_name": ("STRING", {"multiline": False}), + }, + "optional": { + "boolean_value": ("BOOLEAN", {"forceInput": True}), + "int_value": ("INT", {"forceInput": True}), + "float_value": ("FLOAT", {"forceInput": True}), + "string_value": ("STRING", {"forceInput": True}), + } + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + RETURN_TYPES = (any_typ,) + RETURN_NAMES = ("signal_opt",) + OUTPUT_NODE = True + + def doit(self, signal, node_id, widget_name, boolean_value=None, int_value=None, float_value=None, string_value=None, ): + kind = None + if boolean_value is not None: + value = boolean_value + kind = "BOOLEAN" + elif int_value is not None: + value = int_value + kind = "INT" + elif float_value is not None: + value = float_value + kind = "FLOAT" + elif string_value is not None: + value = string_value + kind = "STRING" + else: + value = None + + if value is not None: + PromptServer.instance.send_sync("impact-node-feedback", + {"node_id": node_id, "widget_name": widget_name, "type": kind, "value": value}) + + return (signal,) + + +class ImpactNodeSetMuteState: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "signal": (any_typ,), + "node_id": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "set_state": ("BOOLEAN", {"default": True, "label_on": "active", "label_off": "mute"}), + } + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + RETURN_TYPES = 
(any_typ,) + RETURN_NAMES = ("signal_opt",) + OUTPUT_NODE = True + + def doit(self, signal, node_id, set_state): + PromptServer.instance.send_sync("impact-node-mute-state", {"node_id": node_id, "is_active": set_state}) + return (signal,) + + +class ImpactSleep: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "signal": (any_typ,), + "seconds": ("FLOAT", {"default": 0.5, "min": 0, "max": 3600}), + } + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + RETURN_TYPES = (any_typ,) + RETURN_NAMES = ("signal_opt",) + OUTPUT_NODE = True + + def doit(self, signal, seconds): + time.sleep(seconds) + return (signal,) + + +error_skip_flag = False +try: + import cm_global + def filter_message(str): + global error_skip_flag + + if "IMPACT-PACK-SIGNAL: STOP CONTROL BRIDGE" in str: + return True + elif error_skip_flag and "ERROR:root:!!! Exception during processing !!!\n" == str: + error_skip_flag = False + return True + else: + return False + + cm_global.try_call(api='cm.register_message_collapse', f=filter_message) + +except Exception as e: + print(f"[WARN] ComfyUI-Impact-Pack: `ComfyUI` or `ComfyUI-Manager` is an outdated version.") + pass + + +def workflow_to_map(workflow): + nodes = {} + links = {} + for link in workflow['links']: + links[link[0]] = link[1:] + for node in workflow['nodes']: + nodes[str(node['id'])] = node + + return nodes, links + + +class ImpactRemoteBoolean: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "node_id": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "widget_name": ("STRING", {"multiline": False}), + "value": ("BOOLEAN", {"default": True, "label_on": "True", "label_off": "False"}), + }} + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + RETURN_TYPES = () + OUTPUT_NODE = True + + def doit(self, **kwargs): + return {} + + +class ImpactRemoteInt: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "node_id": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "widget_name": ("STRING", {"multiline": False}), + "value": ("INT", {"default": 0, "min": -0xffffffffffffffff, "max": 0xffffffffffffffff}), + }} + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + RETURN_TYPES = () + OUTPUT_NODE = True + + def doit(self, **kwargs): + return {} + +class ImpactControlBridge: + @classmethod + def INPUT_TYPES(cls): + return {"required": { + "value": (any_typ,), + "mode": ("BOOLEAN", {"default": True, "label_on": "Active", "label_off": "Mute/Bypass"}), + "behavior": ("BOOLEAN", {"default": True, "label_on": "Mute", "label_off": "Bypass"}), + }, + "hidden": {"unique_id": "UNIQUE_ID", "prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"} + } + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Logic/_for_test" + RETURN_TYPES = (any_typ,) + RETURN_NAMES = ("value",) + OUTPUT_NODE = True + + @classmethod + def IS_CHANGED(self, value, mode, behavior=True, unique_id=None, prompt=None, extra_pnginfo=None): + nodes, links = workflow_to_map(extra_pnginfo['workflow']) + + next_nodes = [] + + for link in nodes[unique_id]['outputs'][0]['links']: + node_id = str(links[link][2]) + impact.utils.collect_non_reroute_nodes(nodes, links, next_nodes, node_id) + + return next_nodes + + + def doit(self, value, mode, behavior=True, unique_id=None, prompt=None, extra_pnginfo=None): + global error_skip_flag + + nodes, links = workflow_to_map(extra_pnginfo['workflow']) + + active_nodes = [] + mute_nodes = [] + bypass_nodes = [] + + for link in nodes[unique_id]['outputs'][0]['links']: + node_id = 
str(links[link][2]) + + next_nodes = [] + impact.utils.collect_non_reroute_nodes(nodes, links, next_nodes, node_id) + + for next_node_id in next_nodes: + node_mode = nodes[next_node_id]['mode'] + + if node_mode == 0: + active_nodes.append(next_node_id) + elif node_mode == 2: + mute_nodes.append(next_node_id) + elif node_mode == 4: + bypass_nodes.append(next_node_id) + + if mode: + # active + should_be_active_nodes = mute_nodes + bypass_nodes + if len(should_be_active_nodes) > 0: + PromptServer.instance.send_sync("impact-bridge-continue", {"node_id": unique_id, 'actives': list(should_be_active_nodes)}) + error_skip_flag = True + raise Exception("IMPACT-PACK-SIGNAL: STOP CONTROL BRIDGE\nIf you see this message, your ComfyUI-Manager is outdated. Please update it.") + + elif behavior: + # mute + should_be_mute_nodes = active_nodes + bypass_nodes + if len(should_be_mute_nodes) > 0: + PromptServer.instance.send_sync("impact-bridge-continue", {"node_id": unique_id, 'mutes': list(should_be_mute_nodes)}) + error_skip_flag = True + raise Exception("IMPACT-PACK-SIGNAL: STOP CONTROL BRIDGE\nIf you see this message, your ComfyUI-Manager is outdated. Please update it.") + + else: + # bypass + should_be_bypass_nodes = active_nodes + mute_nodes + if len(should_be_bypass_nodes) > 0: + PromptServer.instance.send_sync("impact-bridge-continue", {"node_id": unique_id, 'bypasses': list(should_be_bypass_nodes)}) + error_skip_flag = True + raise Exception("IMPACT-PACK-SIGNAL: STOP CONTROL BRIDGE\nIf you see this message, your ComfyUI-Manager is outdated. Please update it.") + + return (value, ) + + +original_handle_execution = execution.PromptExecutor.handle_execution_error + + +def handle_execution_error(**kwargs): + print(f" handled") + execution.PromptExecutor.handle_execution_error(**kwargs) + diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/mmdet_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/mmdet_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..5adb147bc1f5fa30112402d2c5917939c4d83cd3 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/mmdet_nodes.py @@ -0,0 +1,219 @@ +import folder_paths +from impact.core import * +import os + +import mmcv +from mmdet.apis import (inference_detector, init_detector) +from mmdet.evaluation import get_classes + + +def load_mmdet(model_path): + model_config = os.path.splitext(model_path)[0] + ".py" + model = init_detector(model_config, model_path, device="cpu") + return model + + +def inference_segm_old(model, image, conf_threshold): + image = image.numpy()[0] * 255 + mmdet_results = inference_detector(model, image) + + bbox_results, segm_results = mmdet_results + label = "A" + + classes = get_classes("coco") + labels = [ + np.full(bbox.shape[0], i, dtype=np.int32) + for i, bbox in enumerate(bbox_results) + ] + n, m = bbox_results[0].shape + if n == 0: + return [[], [], []] + labels = np.concatenate(labels) + bboxes = np.vstack(bbox_results) + segms = mmcv.concat_list(segm_results) + filter_idxs = np.where(bboxes[:, -1] > conf_threshold)[0] + results = [[], [], []] + for i in filter_idxs: + results[0].append(label + "-" + classes[labels[i]]) + results[1].append(bboxes[i]) + results[2].append(segms[i]) + + return results + + +def inference_segm(image, modelname, conf_thres, lab="A"): + image = image.numpy()[0] * 255 + mmdet_results = inference_detector(modelname, image).pred_instances + bboxes = mmdet_results.bboxes.numpy() + segms = mmdet_results.masks.numpy() + scores = mmdet_results.scores.numpy() + + 
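For reference, the mmdet inference helpers in this file all return detections as four parallel lists, [labels, bboxes, masks, scores], keeping only the entries whose score clears the confidence threshold. A toy sketch of that filtering step, with synthetic arrays standing in for real mmdet output:

import numpy as np

# Synthetic stand-ins for detector output (not real mmdet tensors).
labels = np.array([0, 1, 2])                          # class indices
bboxes = np.array([[10., 10., 50., 50.],
                   [60., 60., 90., 90.],
                   [5., 5., 20., 20.]], dtype=np.float32)
scores = np.array([0.92, 0.30, 0.75], dtype=np.float32)
classes = ['person', 'bicycle', 'car']                # toy label table

conf_thres = 0.5
keep = np.where(scores > conf_thres)[0]               # same test as the code above

results = [[], [], [], []]                            # [names, bboxes, masks, scores]
for i in keep:
    results[0].append('A-' + classes[labels[i]])      # "A-" label prefix as in the file
    results[1].append(bboxes[i])
    results[2].append(None)                           # segmentation mask omitted here
    results[3].append(scores[i])
# results[0] == ['A-person', 'A-car']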
classes = get_classes("coco") + + n, m = bboxes.shape + if n == 0: + return [[], [], [], []] + labels = mmdet_results.labels + filter_inds = np.where(mmdet_results.scores > conf_thres)[0] + results = [[], [], [], []] + for i in filter_inds: + results[0].append(lab + "-" + classes[labels[i]]) + results[1].append(bboxes[i]) + results[2].append(segms[i]) + results[3].append(scores[i]) + + return results + + +def inference_bbox(modelname, image, conf_threshold): + image = image.numpy()[0] * 255 + label = "A" + output = inference_detector(modelname, image).pred_instances + cv2_image = np.array(image) + cv2_image = cv2_image[:, :, ::-1].copy() + cv2_gray = cv2.cvtColor(cv2_image, cv2.COLOR_BGR2GRAY) + + segms = [] + for x0, y0, x1, y1 in output.bboxes: + cv2_mask = np.zeros(cv2_gray.shape, np.uint8) + cv2.rectangle(cv2_mask, (int(x0), int(y0)), (int(x1), int(y1)), 255, -1) + cv2_mask_bool = cv2_mask.astype(bool) + segms.append(cv2_mask_bool) + + n, m = output.bboxes.shape + if n == 0: + return [[], [], [], []] + + bboxes = output.bboxes.numpy() + scores = output.scores.numpy() + filter_idxs = np.where(scores > conf_threshold)[0] + results = [[], [], [], []] + for i in filter_idxs: + results[0].append(label) + results[1].append(bboxes[i]) + results[2].append(segms[i]) + results[3].append(scores[i]) + + return results + + +class BBoxDetector: + bbox_model = None + + def __init__(self, bbox_model): + self.bbox_model = bbox_model + + def detect(self, image, threshold, dilation, crop_factor, drop_size=1, detailer_hook=None): + drop_size = max(drop_size, 1) + mmdet_results = inference_bbox(self.bbox_model, image, threshold) + segmasks = create_segmasks(mmdet_results) + + if dilation > 0: + segmasks = dilate_masks(segmasks, dilation) + + items = [] + h = image.shape[1] + w = image.shape[2] + + for x in segmasks: + item_bbox = x[0] + item_mask = x[1] + + y1, x1, y2, x2 = item_bbox + + if x2 - x1 > drop_size and y2 - y1 > drop_size: # minimum dimension must be (2,2) to avoid squeeze issue + crop_region = make_crop_region(w, h, item_bbox, crop_factor) + cropped_image = crop_image(image, crop_region) + cropped_mask = crop_ndarray2(item_mask, crop_region) + confidence = x[2] + # bbox_size = (item_bbox[2]-item_bbox[0],item_bbox[3]-item_bbox[1]) # (w,h) + + item = SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox, None, None) + + items.append(item) + + shape = image.shape[1], image.shape[2] + return shape, items + + def detect_combined(self, image, threshold, dilation): + mmdet_results = inference_bbox(self.bbox_model, image, threshold) + segmasks = create_segmasks(mmdet_results) + if dilation > 0: + segmasks = dilate_masks(segmasks, dilation) + + return combine_masks(segmasks) + + def setAux(self, x): + pass + + +class SegmDetector(BBoxDetector): + segm_model = None + + def __init__(self, segm_model): + self.segm_model = segm_model + + def detect(self, image, threshold, dilation, crop_factor, drop_size=1, detailer_hook=None): + drop_size = max(drop_size, 1) + mmdet_results = inference_segm(image, self.segm_model, threshold) + segmasks = create_segmasks(mmdet_results) + + if dilation > 0: + segmasks = dilate_masks(segmasks, dilation) + + items = [] + h = image.shape[1] + w = image.shape[2] + for x in segmasks: + item_bbox = x[0] + item_mask = x[1] + + y1, x1, y2, x2 = item_bbox + + if x2 - x1 > drop_size and y2 - y1 > drop_size: # minimum dimension must be (2,2) to avoid squeeze issue + crop_region = make_crop_region(w, h, item_bbox, crop_factor) + cropped_image = crop_image(image, 
crop_region) + cropped_mask = crop_ndarray2(item_mask, crop_region) + confidence = x[2] + + item = SEG(cropped_image, cropped_mask, confidence, crop_region, item_bbox, None, None) + items.append(item) + + segs = image.shape, items + + if detailer_hook is not None and hasattr(detailer_hook, "post_detection"): + segs = detailer_hook.post_detection(segs) + + return segs + + def detect_combined(self, image, threshold, dilation): + mmdet_results = inference_bbox(self.bbox_model, image, threshold) + segmasks = create_segmasks(mmdet_results) + if dilation > 0: + segmasks = dilate_masks(segmasks, dilation) + + return combine_masks(segmasks) + + def setAux(self, x): + pass + + +class MMDetDetectorProvider: + @classmethod + def INPUT_TYPES(s): + bboxs = ["bbox/"+x for x in folder_paths.get_filename_list("mmdets_bbox")] + segms = ["segm/"+x for x in folder_paths.get_filename_list("mmdets_segm")] + return {"required": {"model_name": (bboxs + segms, )}} + RETURN_TYPES = ("BBOX_DETECTOR", "SEGM_DETECTOR") + FUNCTION = "load_mmdet" + + CATEGORY = "ImpactPack" + + def load_mmdet(self, model_name): + mmdet_path = folder_paths.get_full_path("mmdets", model_name) + model = load_mmdet(mmdet_path) + + if model_name.startswith("bbox"): + return BBoxDetector(model), NO_SEGM_DETECTOR() + else: + return NO_BBOX_DETECTOR(), model \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/onnx.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/onnx.py new file mode 100644 index 0000000000000000000000000000000000000000..91736a1ac4913220ff1255bf0c463523840b4283 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/onnx.py @@ -0,0 +1,38 @@ +import impact.additional_dependencies +from impact.utils import * + +impact.additional_dependencies.ensure_onnx_package() + +try: + import onnxruntime + + def onnx_inference(image, onnx_model): + # prepare image + pil = tensor2pil(image) + image = np.ascontiguousarray(pil) + image = image[:, :, ::-1] # to BGR image + image = image.astype(np.float32) + image -= [103.939, 116.779, 123.68] # 'caffe' mode image preprocessing + + # do detection + onnx_model = onnxruntime.InferenceSession(onnx_model, providers=["CPUExecutionProvider"]) + outputs = onnx_model.run( + [s_i.name for s_i in onnx_model.get_outputs()], + {onnx_model.get_inputs()[0].name: np.expand_dims(image, axis=0)}, + ) + + labels = [op for op in outputs if op.dtype == "int32"][0] + scores = [op for op in outputs if isinstance(op[0][0], np.float32)][0] + boxes = [op for op in outputs if isinstance(op[0][0], np.ndarray)][0] + + # filter-out useless item + idx = np.where(labels[0] == -1)[0][0] + + labels = labels[0][:idx] + scores = scores[0][:idx] + boxes = boxes[0][:idx].astype(np.uint32) + + return labels, scores, boxes +except Exception as e: + print("[ERROR] ComfyUI-Impact-Pack: 'onnxruntime' package doesn't support 'python 3.11', yet.") + print(f"\t{e}") diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/pipe.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/pipe.py new file mode 100644 index 0000000000000000000000000000000000000000..2f6ca7ee305de706f511f2323068334111250fa9 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/pipe.py @@ -0,0 +1,422 @@ +import folder_paths +import impact.wildcards + +class ToDetailerPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "model": ("MODEL",), + "clip": ("CLIP",), + "vae": ("VAE",), + "positive": ("CONDITIONING",), + "negative": ("CONDITIONING",), + "bbox_detector": ("BBOX_DETECTOR", ), + 
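For reference, ToDetailerPipe here flattens its inputs into a single 14-slot DETAILER_PIPE tuple. The layout below is inferred from the pack/unpack code in this file and is only an informal sketch; unpack_detailer_pipe is a hypothetical helper, not part of the pack:

# Informal layout of the DETAILER_PIPE tuple as packed by ToDetailerPipe.doit;
# slots 10-13 stay None for non-SDXL pipes.
DETAILER_PIPE_SLOTS = (
    'model', 'clip', 'vae', 'positive', 'negative',           # 0-4
    'wildcard', 'bbox_detector',                              # 5-6
    'segm_detector_opt', 'sam_model_opt', 'detailer_hook',    # 7-9
    'refiner_model', 'refiner_clip',                          # 10-11
    'refiner_positive', 'refiner_negative',                   # 12-13
)

def unpack_detailer_pipe(pipe):
    # Dict view of a detailer pipe, convenient for inspection and debugging.
    return dict(zip(DETAILER_PIPE_SLOTS, pipe))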
"wildcard": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "Select to add LoRA": (["Select the LoRA to add to the text"] + folder_paths.get_filename_list("loras"),), + "Select to add Wildcard": (["Select the Wildcard to add to the text"], ), + }, + "optional": { + "sam_model_opt": ("SAM_MODEL",), + "segm_detector_opt": ("SEGM_DETECTOR",), + "detailer_hook": ("DETAILER_HOOK",), + }} + + RETURN_TYPES = ("DETAILER_PIPE", ) + RETURN_NAMES = ("detailer_pipe", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, *args, **kwargs): + pipe = (kwargs['model'], kwargs['clip'], kwargs['vae'], kwargs['positive'], kwargs['negative'], kwargs['wildcard'], kwargs['bbox_detector'], + kwargs.get('segm_detector_opt', None), kwargs.get('sam_model_opt', None), kwargs.get('detailer_hook', None), + kwargs.get('refiner_model', None), kwargs.get('refiner_clip', None), + kwargs.get('refiner_positive', None), kwargs.get('refiner_negative', None)) + return (pipe, ) + + +class ToDetailerPipeSDXL(ToDetailerPipe): + @classmethod + def INPUT_TYPES(s): + return {"required": { + "model": ("MODEL",), + "clip": ("CLIP",), + "vae": ("VAE",), + "positive": ("CONDITIONING",), + "negative": ("CONDITIONING",), + "refiner_model": ("MODEL",), + "refiner_clip": ("CLIP",), + "refiner_positive": ("CONDITIONING",), + "refiner_negative": ("CONDITIONING",), + "bbox_detector": ("BBOX_DETECTOR", ), + "wildcard": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "Select to add LoRA": (["Select the LoRA to add to the text"] + folder_paths.get_filename_list("loras"),), + "Select to add Wildcard": (["Select the Wildcard to add to the text"],), + }, + "optional": { + "sam_model_opt": ("SAM_MODEL",), + "segm_detector_opt": ("SEGM_DETECTOR",), + "detailer_hook": ("DETAILER_HOOK",), + }} + + +class FromDetailerPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": {"detailer_pipe": ("DETAILER_PIPE",), }, } + + RETURN_TYPES = ("MODEL", "CLIP", "VAE", "CONDITIONING", "CONDITIONING", "BBOX_DETECTOR", "SAM_MODEL", "SEGM_DETECTOR", "DETAILER_HOOK") + RETURN_NAMES = ("model", "clip", "vae", "positive", "negative", "bbox_detector", "sam_model_opt", "segm_detector_opt", "detailer_hook") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, detailer_pipe): + model, clip, vae, positive, negative, wildcard, bbox_detector, segm_detector_opt, sam_model_opt, detailer_hook, _, _, _, _ = detailer_pipe + return model, clip, vae, positive, negative, bbox_detector, sam_model_opt, segm_detector_opt, detailer_hook + + +class FromDetailerPipe_v2: + @classmethod + def INPUT_TYPES(s): + return {"required": {"detailer_pipe": ("DETAILER_PIPE",), }, } + + RETURN_TYPES = ("DETAILER_PIPE", "MODEL", "CLIP", "VAE", "CONDITIONING", "CONDITIONING", "BBOX_DETECTOR", "SAM_MODEL", "SEGM_DETECTOR", "DETAILER_HOOK") + RETURN_NAMES = ("detailer_pipe", "model", "clip", "vae", "positive", "negative", "bbox_detector", "sam_model_opt", "segm_detector_opt", "detailer_hook") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, detailer_pipe): + model, clip, vae, positive, negative, wildcard, bbox_detector, segm_detector_opt, sam_model_opt, detailer_hook, _, _, _, _ = detailer_pipe + return detailer_pipe, model, clip, vae, positive, negative, bbox_detector, sam_model_opt, segm_detector_opt, detailer_hook + + +class FromDetailerPipe_SDXL: + @classmethod + def INPUT_TYPES(s): + return {"required": {"detailer_pipe": ("DETAILER_PIPE",), }, } + + RETURN_TYPES = ("DETAILER_PIPE", "MODEL", "CLIP", "VAE", 
"CONDITIONING", "CONDITIONING", "BBOX_DETECTOR", "SAM_MODEL", "SEGM_DETECTOR", "DETAILER_HOOK", "MODEL", "CLIP", "CONDITIONING", "CONDITIONING") + RETURN_NAMES = ("detailer_pipe", "model", "clip", "vae", "positive", "negative", "bbox_detector", "sam_model_opt", "segm_detector_opt", "detailer_hook", "refiner_model", "refiner_clip", "refiner_positive", "refiner_negative") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, detailer_pipe): + model, clip, vae, positive, negative, wildcard, bbox_detector, segm_detector_opt, sam_model_opt, detailer_hook, refiner_model, refiner_clip, refiner_positive, refiner_negative = detailer_pipe + return detailer_pipe, model, clip, vae, positive, negative, bbox_detector, sam_model_opt, segm_detector_opt, detailer_hook, refiner_model, refiner_clip, refiner_positive, refiner_negative + + +class ToBasicPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "model": ("MODEL",), + "clip": ("CLIP",), + "vae": ("VAE",), + "positive": ("CONDITIONING",), + "negative": ("CONDITIONING",), + }, + } + + RETURN_TYPES = ("BASIC_PIPE", ) + RETURN_NAMES = ("basic_pipe", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, model, clip, vae, positive, negative): + pipe = (model, clip, vae, positive, negative) + return (pipe, ) + + +class FromBasicPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": {"basic_pipe": ("BASIC_PIPE",), }, } + + RETURN_TYPES = ("MODEL", "CLIP", "VAE", "CONDITIONING", "CONDITIONING") + RETURN_NAMES = ("model", "clip", "vae", "positive", "negative") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, basic_pipe): + model, clip, vae, positive, negative = basic_pipe + return model, clip, vae, positive, negative + + +class FromBasicPipe_v2: + @classmethod + def INPUT_TYPES(s): + return {"required": {"basic_pipe": ("BASIC_PIPE",), }, } + + RETURN_TYPES = ("BASIC_PIPE", "MODEL", "CLIP", "VAE", "CONDITIONING", "CONDITIONING") + RETURN_NAMES = ("basic_pipe", "model", "clip", "vae", "positive", "negative") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, basic_pipe): + model, clip, vae, positive, negative = basic_pipe + return basic_pipe, model, clip, vae, positive, negative + + +class BasicPipeToDetailerPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": {"basic_pipe": ("BASIC_PIPE",), + "bbox_detector": ("BBOX_DETECTOR", ), + "wildcard": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "Select to add LoRA": (["Select the LoRA to add to the text"] + folder_paths.get_filename_list("loras"),), + "Select to add Wildcard": (["Select the Wildcard to add to the text"],), + }, + "optional": { + "sam_model_opt": ("SAM_MODEL", ), + "segm_detector_opt": ("SEGM_DETECTOR",), + "detailer_hook": ("DETAILER_HOOK",), + }, + } + + RETURN_TYPES = ("DETAILER_PIPE", ) + RETURN_NAMES = ("detailer_pipe", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, *args, **kwargs): + basic_pipe = kwargs['basic_pipe'] + bbox_detector = kwargs['bbox_detector'] + wildcard = kwargs['wildcard'] + sam_model_opt = kwargs.get('sam_model_opt', None) + segm_detector_opt = kwargs.get('segm_detector_opt', None) + detailer_hook = kwargs.get('detailer_hook', None) + + model, clip, vae, positive, negative = basic_pipe + pipe = model, clip, vae, positive, negative, wildcard, bbox_detector, segm_detector_opt, sam_model_opt, detailer_hook, None, None, None, None + return (pipe, ) + + +class BasicPipeToDetailerPipeSDXL: + @classmethod + def 
INPUT_TYPES(s): + return {"required": {"base_basic_pipe": ("BASIC_PIPE",), + "refiner_basic_pipe": ("BASIC_PIPE",), + "bbox_detector": ("BBOX_DETECTOR", ), + "wildcard": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "Select to add LoRA": (["Select the LoRA to add to the text"] + folder_paths.get_filename_list("loras"),), + "Select to add Wildcard": (["Select the Wildcard to add to the text"],), + }, + "optional": { + "sam_model_opt": ("SAM_MODEL", ), + "segm_detector_opt": ("SEGM_DETECTOR",), + "detailer_hook": ("DETAILER_HOOK",), + }, + } + + RETURN_TYPES = ("DETAILER_PIPE", ) + RETURN_NAMES = ("detailer_pipe", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, *args, **kwargs): + base_basic_pipe = kwargs['base_basic_pipe'] + refiner_basic_pipe = kwargs['refiner_basic_pipe'] + bbox_detector = kwargs['bbox_detector'] + wildcard = kwargs['wildcard'] + sam_model_opt = kwargs.get('sam_model_opt', None) + segm_detector_opt = kwargs.get('segm_detector_opt', None) + detailer_hook = kwargs.get('detailer_hook', None) + + model, clip, vae, positive, negative = base_basic_pipe + refiner_model, refiner_clip, refiner_vae, refiner_positive, refiner_negative = refiner_basic_pipe + pipe = model, clip, vae, positive, negative, wildcard, bbox_detector, segm_detector_opt, sam_model_opt, detailer_hook, refiner_model, refiner_clip, refiner_positive, refiner_negative + return (pipe, ) + + +class DetailerPipeToBasicPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": {"detailer_pipe": ("DETAILER_PIPE",), }} + + RETURN_TYPES = ("BASIC_PIPE", "BASIC_PIPE") + RETURN_NAMES = ("base_basic_pipe", "refiner_basic_pipe") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, detailer_pipe): + model, clip, vae, positive, negative, _, _, _, _, _, refiner_model, refiner_clip, refiner_positive, refiner_negative = detailer_pipe + pipe = model, clip, vae, positive, negative + refiner_pipe = refiner_model, refiner_clip, vae, refiner_positive, refiner_negative + return (pipe, refiner_pipe) + + +class EditBasicPipe: + @classmethod + def INPUT_TYPES(s): + return { + "required": {"basic_pipe": ("BASIC_PIPE",), }, + "optional": { + "model": ("MODEL",), + "clip": ("CLIP",), + "vae": ("VAE",), + "positive": ("CONDITIONING",), + "negative": ("CONDITIONING",), + }, + } + + RETURN_TYPES = ("BASIC_PIPE", ) + RETURN_NAMES = ("basic_pipe", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, basic_pipe, model=None, clip=None, vae=None, positive=None, negative=None): + res_model, res_clip, res_vae, res_positive, res_negative = basic_pipe + + if model is not None: + res_model = model + + if clip is not None: + res_clip = clip + + if vae is not None: + res_vae = vae + + if positive is not None: + res_positive = positive + + if negative is not None: + res_negative = negative + + pipe = res_model, res_clip, res_vae, res_positive, res_negative + + return (pipe, ) + + +class EditDetailerPipe: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "detailer_pipe": ("DETAILER_PIPE",), + "wildcard": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "Select to add LoRA": (["Select the LoRA to add to the text"] + folder_paths.get_filename_list("loras"),), + "Select to add Wildcard": (["Select the Wildcard to add to the text"],), + }, + "optional": { + "model": ("MODEL",), + "clip": ("CLIP",), + "vae": ("VAE",), + "positive": ("CONDITIONING",), + "negative": ("CONDITIONING",), + "bbox_detector": ("BBOX_DETECTOR",), + "sam_model": ("SAM_MODEL",), 
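EditBasicPipe above and EditDetailerPipe (whose doit follows) share one pattern: unpack the tuple, swap in only the fields that were actually supplied, and repack. A compact sketch of that override pattern, using placeholder strings for the real model/clip/vae handles:

def edit_pipe(pipe, **overrides):
    # Return a copy of a (model, clip, vae, positive, negative) tuple with any
    # non-None keyword override applied; None means "keep the existing value".
    names = ('model', 'clip', 'vae', 'positive', 'negative')
    return tuple(overrides[name] if overrides.get(name) is not None else old
                 for name, old in zip(names, pipe))

basic_pipe = ('MODEL', 'CLIP', 'VAE', 'POS', 'NEG')   # placeholders
edited = edit_pipe(basic_pipe, positive='NEW_POS')
# -> ('MODEL', 'CLIP', 'VAE', 'NEW_POS', 'NEG')

Note that EditDetailerPipe treats the wildcard slot differently: it is replaced only when the new string is non-empty, so an empty wildcard widget leaves the pipe's existing wildcard intact.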
+ "segm_detector": ("SEGM_DETECTOR",), + "detailer_hook": ("DETAILER_HOOK",), + }, + } + + RETURN_TYPES = ("DETAILER_PIPE",) + RETURN_NAMES = ("detailer_pipe",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Pipe" + + def doit(self, *args, **kwargs): + detailer_pipe = kwargs['detailer_pipe'] + wildcard = kwargs['wildcard'] + model = kwargs.get('model', None) + clip = kwargs.get('clip', None) + vae = kwargs.get('vae', None) + positive = kwargs.get('positive', None) + negative = kwargs.get('negative', None) + bbox_detector = kwargs.get('bbox_detector', None) + sam_model = kwargs.get('sam_model', None) + segm_detector = kwargs.get('segm_detector', None) + detailer_hook = kwargs.get('detailer_hook', None) + refiner_model = kwargs.get('refiner_model', None) + refiner_clip = kwargs.get('refiner_clip', None) + refiner_positive = kwargs.get('refiner_positive', None) + refiner_negative = kwargs.get('refiner_negative', None) + + res_model, res_clip, res_vae, res_positive, res_negative, res_wildcard, res_bbox_detector, res_segm_detector, res_sam_model, res_detailer_hook, res_refiner_model, res_refiner_clip, res_refiner_positive, res_refiner_negative = detailer_pipe + + if model is not None: + res_model = model + + if clip is not None: + res_clip = clip + + if vae is not None: + res_vae = vae + + if positive is not None: + res_positive = positive + + if negative is not None: + res_negative = negative + + if bbox_detector is not None: + res_bbox_detector = bbox_detector + + if segm_detector is not None: + res_segm_detector = segm_detector + + if wildcard != "": + res_wildcard = wildcard + + if sam_model is not None: + res_sam_model = sam_model + + if detailer_hook is not None: + res_detailer_hook = detailer_hook + + if refiner_model is not None: + res_refiner_model = refiner_model + + if refiner_clip is not None: + res_refiner_clip = refiner_clip + + if refiner_positive is not None: + res_refiner_positive = refiner_positive + + if refiner_negative is not None: + res_refiner_negative = refiner_negative + + pipe = (res_model, res_clip, res_vae, res_positive, res_negative, res_wildcard, + res_bbox_detector, res_segm_detector, res_sam_model, res_detailer_hook, + res_refiner_model, res_refiner_clip, res_refiner_positive, res_refiner_negative) + + return (pipe, ) + + +class EditDetailerPipeSDXL(EditDetailerPipe): + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "detailer_pipe": ("DETAILER_PIPE",), + "wildcard": ("STRING", {"multiline": True, "dynamicPrompts": False}), + "Select to add LoRA": (["Select the LoRA to add to the text"] + folder_paths.get_filename_list("loras"),), + "Select to add Wildcard": (["Select the Wildcard to add to the text"],), + }, + "optional": { + "model": ("MODEL",), + "clip": ("CLIP",), + "vae": ("VAE",), + "positive": ("CONDITIONING",), + "negative": ("CONDITIONING",), + "refiner_model": ("MODEL",), + "refiner_clip": ("CLIP",), + "refiner_positive": ("CONDITIONING",), + "refiner_negative": ("CONDITIONING",), + "bbox_detector": ("BBOX_DETECTOR",), + "sam_model": ("SAM_MODEL",), + "segm_detector": ("SEGM_DETECTOR",), + "detailer_hook": ("DETAILER_HOOK",), + }, + } diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py new file mode 100644 index 0000000000000000000000000000000000000000..01b5600671e4bc5620251eab6f0c5a4ffe625571 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py @@ -0,0 +1,25 @@ +import comfy.sample +import 
traceback + +original_sample = comfy.sample.sample + + +def informative_sample(*args, **kwargs): + try: + return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations. + except RuntimeError as e: + is_model_mix_issue = False + try: + if 'mat1 and mat2 shapes cannot be multiplied' in e.args[0]: + if 'torch.nn.functional.linear' in traceback.format_exc().strip().split('\n')[-3]: + is_model_mix_issue = True + except: + pass + + if is_model_mix_issue: + raise RuntimeError("\n\n#### It seems that models and clips are mixed and interconnected between SDXL Base, SDXL Refiner, SD1.x, and SD2.x. Please verify. ####\n\n") + else: + raise e + + +comfy.sample.sample = informative_sample diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/segs_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/segs_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..06acb04926165627aa61e613152749eeccb66511 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/segs_nodes.py @@ -0,0 +1,1543 @@ +import os +import sys + +import impact.impact_server +from nodes import MAX_RESOLUTION + +from impact.utils import * +import impact.core as core +from impact.core import SEG +import impact.utils as utils +from . import defs + + +class SEGSDetailer: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "image": ("IMAGE", ), + "segs": ("SEGS", ), + "guide_size": ("FLOAT", {"default": 256, "min": 64, "max": MAX_RESOLUTION, "step": 8}), + "guide_size_for": ("BOOLEAN", {"default": True, "label_on": "bbox", "label_off": "crop_region"}), + "max_size": ("FLOAT", {"default": 768, "min": 64, "max": MAX_RESOLUTION, "step": 8}), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS,), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS,), + "denoise": ("FLOAT", {"default": 0.5, "min": 0.0001, "max": 1.0, "step": 0.01}), + "noise_mask": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + "force_inpaint": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "basic_pipe": ("BASIC_PIPE",), + "refiner_ratio": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0}), + "batch_size": ("INT", {"default": 1, "min": 1, "max": 100}), + + "cycle": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}), + }, + "optional": { + "refiner_basic_pipe_opt": ("BASIC_PIPE",), + "inpaint_model": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "noise_mask_feather": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), + } + } + + RETURN_TYPES = ("SEGS", "IMAGE") + RETURN_NAMES = ("segs", "cnet_images") + OUTPUT_IS_LIST = (False, True) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + @staticmethod + def do_detail(image, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + denoise, noise_mask, force_inpaint, basic_pipe, refiner_ratio=None, batch_size=1, cycle=1, + refiner_basic_pipe_opt=None, inpaint_model=False, noise_mask_feather=0): + + model, clip, vae, positive, negative = basic_pipe + if refiner_basic_pipe_opt is None: + refiner_model, refiner_clip, refiner_positive, refiner_negative = None, None, None, None + else: + refiner_model, refiner_clip, _, 
refiner_positive, refiner_negative = refiner_basic_pipe_opt + + segs = core.segs_scale_match(segs, image.shape) + + new_segs = [] + cnet_pil_list = [] + + for i in range(batch_size): + seed += 1 + for seg in segs[1]: + cropped_image = seg.cropped_image if seg.cropped_image is not None \ + else crop_ndarray4(image.numpy(), seg.crop_region) + cropped_image = to_tensor(cropped_image) + + is_mask_all_zeros = (seg.cropped_mask == 0).all().item() + if is_mask_all_zeros: + print(f"Detailer: segment skip [empty mask]") + new_segs.append(seg) + continue + + if noise_mask: + cropped_mask = seg.cropped_mask + else: + cropped_mask = None + + enhanced_image, cnet_pils = core.enhance_detail(cropped_image, model, clip, vae, guide_size, guide_size_for, max_size, + seg.bbox, seed, steps, cfg, sampler_name, scheduler, + positive, negative, denoise, cropped_mask, force_inpaint, + refiner_ratio=refiner_ratio, refiner_model=refiner_model, + refiner_clip=refiner_clip, refiner_positive=refiner_positive, refiner_negative=refiner_negative, + control_net_wrapper=seg.control_net_wrapper, cycle=cycle, + inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + if cnet_pils is not None: + cnet_pil_list.extend(cnet_pils) + + if enhanced_image is None: + new_cropped_image = cropped_image + else: + new_cropped_image = enhanced_image + + new_seg = SEG(to_numpy(new_cropped_image), seg.cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, None) + new_segs.append(new_seg) + + return (segs[0], new_segs), cnet_pil_list + + def doit(self, image, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, scheduler, + denoise, noise_mask, force_inpaint, basic_pipe, refiner_ratio=None, batch_size=1, cycle=1, + refiner_basic_pipe_opt=None, inpaint_model=False, noise_mask_feather=0): + + if len(image) > 1: + raise Exception('[Impact Pack] ERROR: SEGSDetailer does not allow image batches.\nPlease refer to https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/batching-detailer.md for more information.') + + segs, cnet_pil_list = SEGSDetailer.do_detail(image, segs, guide_size, guide_size_for, max_size, seed, steps, cfg, sampler_name, + scheduler, denoise, noise_mask, force_inpaint, basic_pipe, refiner_ratio, batch_size, cycle=cycle, + refiner_basic_pipe_opt=refiner_basic_pipe_opt, + inpaint_model=inpaint_model, noise_mask_feather=noise_mask_feather) + + # set fallback image + if len(cnet_pil_list) == 0: + cnet_pil_list = [empty_pil_tensor()] + + return (segs, cnet_pil_list) + + +class SEGSPaste: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "image": ("IMAGE", ), + "segs": ("SEGS", ), + "feather": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}), + "alpha": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}), + }, + "optional": {"ref_image_opt": ("IMAGE", ), } + } + + RETURN_TYPES = ("IMAGE", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Detailer" + + @staticmethod + def doit(image, segs, feather, alpha=255, ref_image_opt=None): + + segs = core.segs_scale_match(segs, image.shape) + + result = None + for i, single_image in enumerate(image): + image_i = single_image.unsqueeze(0).clone() + + for seg in segs[1]: + ref_image = None + if ref_image_opt is None and seg.cropped_image is not None: + cropped_image = seg.cropped_image + if isinstance(cropped_image, np.ndarray): + cropped_image = torch.from_numpy(cropped_image) + ref_image = cropped_image[i].unsqueeze(0) + elif ref_image_opt is not None: + ref_tensor = 
ref_image_opt[i].unsqueeze(0) + ref_image = crop_image(ref_tensor, seg.crop_region) + if ref_image is not None: + if seg.cropped_mask.ndim == 3 and len(seg.cropped_mask) == len(image): + mask = seg.cropped_mask[i] + elif seg.cropped_mask.ndim == 3 and len(seg.cropped_mask) > 1: + print(f"[Impact Pack] WARN: SEGSPaste - The number of the mask batch({len(seg.cropped_mask)}) and the image batch({len(image)}) are different. Combine the mask frames and apply.") + combined_mask = (seg.cropped_mask[0] * 255).to(torch.uint8) + + for frame_mask in seg.cropped_mask[1:]: + combined_mask |= (frame_mask * 255).to(torch.uint8) + + combined_mask = (combined_mask/255.0).to(torch.float32) + mask = utils.to_binary_mask(combined_mask, 0.1) + else: # ndim == 2 + mask = seg.cropped_mask + + mask = tensor_gaussian_blur_mask(mask, feather) * (alpha/255) + x, y, *_ = seg.crop_region + tensor_paste(image_i, ref_image, (x, y), mask) + + if result is None: + result = image_i + else: + result = torch.concat((result, image_i), dim=0) + + return (result, ) + + +class SEGSPreviewCNet: + def __init__(self): + self.output_dir = folder_paths.get_temp_directory() + self.type = "temp" + + @classmethod + def INPUT_TYPES(s): + return {"required": {"segs": ("SEGS", ),}, } + + RETURN_TYPES = ("IMAGE", ) + OUTPUT_IS_LIST = (True, ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + OUTPUT_NODE = True + + def doit(self, segs): + full_output_folder, filename, counter, subfolder, filename_prefix = \ + folder_paths.get_save_image_path("impact_seg_preview", self.output_dir, segs[0][1], segs[0][0]) + + results = list() + result_image_list = [] + + for seg in segs[1]: + file = f"{filename}_{counter:05}_.webp" + + if seg.control_net_wrapper is not None and seg.control_net_wrapper.control_image is not None: + cnet_image = seg.control_net_wrapper.control_image + result_image_list.append(cnet_image) + else: + cnet_image = empty_pil_tensor(64, 64) + + cnet_pil = utils.tensor2pil(cnet_image) + cnet_pil.save(os.path.join(full_output_folder, file)) + + results.append({ + "filename": file, + "subfolder": subfolder, + "type": self.type + }) + + counter += 1 + + return {"ui": {"images": results}, "result": (result_image_list,)} + + +class SEGSPreview: + def __init__(self): + self.output_dir = folder_paths.get_temp_directory() + self.type = "temp" + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS", ), + "alpha_mode": ("BOOLEAN", {"default": True, "label_on": "enable", "label_off": "disable"}), + "min_alpha": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0, "step": 0.01}), + }, + "optional": { + "fallback_image_opt": ("IMAGE", ), + } + } + + RETURN_TYPES = ("IMAGE", ) + OUTPUT_IS_LIST = (True, ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + OUTPUT_NODE = True + + def doit(self, segs, alpha_mode=True, min_alpha=0.0, fallback_image_opt=None): + full_output_folder, filename, counter, subfolder, filename_prefix = \ + folder_paths.get_save_image_path("impact_seg_preview", self.output_dir, segs[0][1], segs[0][0]) + + results = list() + result_image_list = [] + + if fallback_image_opt is not None: + segs = core.segs_scale_match(segs, fallback_image_opt.shape) + + if min_alpha != 0: + min_alpha = int(255 * min_alpha) + + if len(segs[1]) > 0: + if segs[1][0].cropped_image is not None: + batch_count = len(segs[1][0].cropped_image) + elif fallback_image_opt is not None: + batch_count = len(fallback_image_opt) + else: + return {"ui": {"images": results}} + + for seg in segs[1]: + result_image_batch = None + 
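SEGSPaste above blends each crop back using a feathered mask: the mask is gaussian-blurred, scaled by alpha/255, and then used as per-pixel opacity, i.e. roughly out = base * (1 - m) + ref * m. A small numpy sketch of that compositing step (tensor_paste is assumed to do approximately this on torch tensors):

import numpy as np

def feather_paste(base, ref, mask, xy):
    # Alpha-composite ref (h, w, c) onto base at top-left xy, using mask (h, w)
    # values in [0, 1] as per-pixel opacity.
    x, y = xy
    h, w = ref.shape[:2]
    m = mask[..., None]                               # broadcast over channels
    base[y:y + h, x:x + w] = base[y:y + h, x:x + w] * (1.0 - m) + ref * m
    return base

canvas = np.zeros((128, 128, 3), dtype=np.float32)
patch = np.ones((32, 32, 3), dtype=np.float32)
feather = np.tile(np.linspace(0.0, 1.0, 32, dtype=np.float32), (32, 1))  # toy horizontal ramp
feather_paste(canvas, patch, feather, (40, 40))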
cached_mask = None + + def get_combined_mask(): + nonlocal cached_mask + + if cached_mask is not None: + return cached_mask + else: + if isinstance(seg.cropped_mask, np.ndarray): + masks = torch.tensor(seg.cropped_mask) + else: + masks = seg.cropped_mask + + cached_mask = (masks[0] * 255).to(torch.uint8) + for x in masks[1:]: + cached_mask |= (x * 255).to(torch.uint8) + cached_mask = (cached_mask/255.0).to(torch.float32) + cached_mask = utils.to_binary_mask(cached_mask, 0.1) + cached_mask = cached_mask.numpy() + + return cached_mask + + def stack_image(image, mask=None): + nonlocal result_image_batch + + if isinstance(image, np.ndarray): + image = torch.from_numpy(image) + + if mask is not None: + image *= torch.tensor(mask)[None, ..., None] + + if result_image_batch is None: + result_image_batch = image + else: + result_image_batch = torch.concat((result_image_batch, image), dim=0) + + for i in range(batch_count): + cropped_image = None + + if seg.cropped_image is not None: + cropped_image = seg.cropped_image[i, None] + elif fallback_image_opt is not None: + # take from original image + ref_image = fallback_image_opt[i].unsqueeze(0) + cropped_image = crop_image(ref_image, seg.crop_region) + + if cropped_image is not None: + if isinstance(cropped_image, np.ndarray): + cropped_image = torch.from_numpy(cropped_image) + + cropped_image = cropped_image.clone() + cropped_pil = to_pil(cropped_image) + + if alpha_mode: + if isinstance(seg.cropped_mask, np.ndarray): + cropped_mask = seg.cropped_mask + else: + if seg.cropped_image is not None and len(seg.cropped_image) != len(seg.cropped_mask): + cropped_mask = get_combined_mask() + else: + cropped_mask = seg.cropped_mask[i].numpy() + + mask_array = (cropped_mask * 255).astype(np.uint8) + + if min_alpha != 0: + mask_array[mask_array < min_alpha] = min_alpha + + mask_pil = Image.fromarray(mask_array, mode='L').resize(cropped_pil.size) + cropped_pil.putalpha(mask_pil) + stack_image(cropped_image, cropped_mask) + else: + stack_image(cropped_image) + + file = f"{filename}_{counter:05}_.webp" + cropped_pil.save(os.path.join(full_output_folder, file)) + results.append({ + "filename": file, + "subfolder": subfolder, + "type": self.type + }) + + counter += 1 + + if result_image_batch is not None: + result_image_list.append(result_image_batch) + + return {"ui": {"images": results}, "result": (result_image_list,) } + + +class SEGSLabelFilter: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS", ), + "preset": (['all'] + defs.detection_labels, ), + "labels": ("STRING", {"multiline": True, "placeholder": "List the types of segments to be allowed, separated by commas"}), + }, + } + + RETURN_TYPES = ("SEGS", "SEGS",) + RETURN_NAMES = ("filtered_SEGS", "remained_SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + @staticmethod + def filter(segs, labels): + labels = set([label.strip() for label in labels]) + + if 'all' in labels: + return (segs, (segs[0], []), ) + else: + res_segs = [] + remained_segs = [] + + for x in segs[1]: + if x.label in labels: + res_segs.append(x) + elif 'eyes' in labels and x.label in ['left_eye', 'right_eye']: + res_segs.append(x) + elif 'eyebrows' in labels and x.label in ['left_eyebrow', 'right_eyebrow']: + res_segs.append(x) + elif 'pupils' in labels and x.label in ['left_pupil', 'right_pupil']: + res_segs.append(x) + else: + remained_segs.append(x) + + return ((segs[0], res_segs), (segs[0], remained_segs), ) + + def doit(self, segs, preset, labels): + labels = labels.split(',') + return 
SEGSLabelFilter.filter(segs, labels) + + +class SEGSLabelAssign: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS", ), + "labels": ("STRING", {"multiline": True, "placeholder": "List the label to be assigned in order of segs, separated by commas"}), + }, + } + + RETURN_TYPES = ("SEGS",) + RETURN_NAMES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + @staticmethod + def assign(segs, labels): + labels = [label.strip() for label in labels] + + if len(labels) != len(segs[1]): + print(f'Warning (SEGSLabelAssign): length of labels ({len(labels)}) != length of segs ({len(segs[1])})') + + labeled_segs = [] + + idx = 0 + for x in segs[1]: + if len(labels) > idx: + x = x._replace(label=labels[idx]) + labeled_segs.append(x) + idx += 1 + + return ((segs[0], labeled_segs), ) + + def doit(self, segs, labels): + labels = labels.split(',') + return SEGSLabelAssign.assign(segs, labels) + + +class SEGSOrderedFilter: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS", ), + "target": (["area(=w*h)", "width", "height", "x1", "y1", "x2", "y2"],), + "order": ("BOOLEAN", {"default": True, "label_on": "descending", "label_off": "ascending"}), + "take_start": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + "take_count": ("INT", {"default": 1, "min": 0, "max": sys.maxsize, "step": 1}), + }, + } + + RETURN_TYPES = ("SEGS", "SEGS",) + RETURN_NAMES = ("filtered_SEGS", "remained_SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs, target, order, take_start, take_count): + segs_with_order = [] + + for seg in segs[1]: + x1 = seg.crop_region[0] + y1 = seg.crop_region[1] + x2 = seg.crop_region[2] + y2 = seg.crop_region[3] + + if target == "area(=w*h)": + value = (y2 - y1) * (x2 - x1) + elif target == "width": + value = x2 - x1 + elif target == "height": + value = y2 - y1 + elif target == "x1": + value = x1 + elif target == "x2": + value = x2 + elif target == "y1": + value = y1 + else: + value = y2 + + segs_with_order.append((value, seg)) + + if order: + sorted_list = sorted(segs_with_order, key=lambda x: x[0], reverse=True) + else: + sorted_list = sorted(segs_with_order, key=lambda x: x[0], reverse=False) + + result_list = [] + remained_list = [] + + for i, item in enumerate(sorted_list): + if take_start <= i < take_start + take_count: + result_list.append(item[1]) + else: + remained_list.append(item[1]) + + return ((segs[0], result_list), (segs[0], remained_list), ) + + +class SEGSRangeFilter: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS", ), + "target": (["area(=w*h)", "width", "height", "x1", "y1", "x2", "y2", "length_percent"],), + "mode": ("BOOLEAN", {"default": True, "label_on": "inside", "label_off": "outside"}), + "min_value": ("INT", {"default": 0, "min": 0, "max": sys.maxsize, "step": 1}), + "max_value": ("INT", {"default": 67108864, "min": 0, "max": sys.maxsize, "step": 1}), + }, + } + + RETURN_TYPES = ("SEGS", "SEGS",) + RETURN_NAMES = ("filtered_SEGS", "remained_SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs, target, mode, min_value, max_value): + new_segs = [] + remained_segs = [] + + for seg in segs[1]: + x1 = seg.crop_region[0] + y1 = seg.crop_region[1] + x2 = seg.crop_region[2] + y2 = seg.crop_region[3] + + if target == "area(=w*h)": + value = (y2 - y1) * (x2 - x1) + elif target == "length_percent": + h = y2 - y1 + w = x2 - x1 + value = max(h/w, w/h)*100 + print(f"value={value}") + elif target == "width": + 
value = x2 - x1 + elif target == "height": + value = y2 - y1 + elif target == "x1": + value = x1 + elif target == "x2": + value = x2 + elif target == "y1": + value = y1 + else: + value = y2 + + if mode and min_value <= value <= max_value: + print(f"[in] value={value} / {mode}, {min_value}, {max_value}") + new_segs.append(seg) + elif not mode and (value < min_value or value > max_value): + print(f"[out] value={value} / {mode}, {min_value}, {max_value}") + new_segs.append(seg) + else: + remained_segs.append(seg) + print(f"[filter] value={value} / {mode}, {min_value}, {max_value}") + + return ((segs[0], new_segs), (segs[0], remained_segs), ) + + +class SEGSToImageList: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS", ), + }, + "optional": { + "fallback_image_opt": ("IMAGE", ), + } + } + + RETURN_TYPES = ("IMAGE",) + OUTPUT_IS_LIST = (True,) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs, fallback_image_opt=None): + results = list() + + if fallback_image_opt is not None: + segs = core.segs_scale_match(segs, fallback_image_opt.shape) + + for seg in segs[1]: + if seg.cropped_image is not None: + cropped_image = to_tensor(seg.cropped_image) + elif fallback_image_opt is not None: + # take from original image + cropped_image = to_tensor(crop_image(fallback_image_opt, seg.crop_region)) + else: + cropped_image = empty_pil_tensor() + + results.append(cropped_image) + + if len(results) == 0: + results.append(empty_pil_tensor()) + + return (results,) + + +class SEGSToMaskList: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS", ), + }, + } + + RETURN_TYPES = ("MASK",) + OUTPUT_IS_LIST = (True,) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs): + masks = core.segs_to_masklist(segs) + if len(masks) == 0: + empty_mask = torch.zeros(segs[0], dtype=torch.float32, device="cpu") + masks = [empty_mask] + masks = [utils.make_3d_mask(mask) for mask in masks] + return (masks,) + + +class SEGSToMaskBatch: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS", ), + }, + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs): + masks = core.segs_to_masklist(segs) + masks = [utils.make_3d_mask(mask) for mask in masks] + mask_batch = torch.concat(masks) + return (mask_batch,) + + +class SEGSConcat: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs1": ("SEGS", ), + }, + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, **kwargs): + dim = None + res = None + + for k, v in list(kwargs.items()): + if v[0] == (0, 0) or len(v[1]) == 0: + continue + + if dim is None: + dim = v[0] + res = v[1] + else: + if v[0] == dim: + res = res + v[1] + else: + print(f"ERROR: source shape of 'segs1'{dim} and '{k}'{v[0]} are different. 
'{k}' will be ignored") + + if dim is None: + empty_segs = ((0, 0), []) + return (empty_segs, ) + else: + return ((dim, res), ) + + +class DecomposeSEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS", ), + }, + } + + RETURN_TYPES = ("SEGS_HEADER", "SEG_ELT",) + OUTPUT_IS_LIST = (False, True, ) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs): + return segs + + +class AssembleSEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "seg_header": ("SEGS_HEADER", ), + "seg_elt": ("SEG_ELT", ), + }, + } + + INPUT_IS_LIST = True + + RETURN_TYPES = ("SEGS", ) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, seg_header, seg_elt): + return ((seg_header[0], seg_elt), ) + + +class From_SEG_ELT: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "seg_elt": ("SEG_ELT", ), + }, + } + + RETURN_TYPES = ("SEG_ELT", "IMAGE", "MASK", "SEG_ELT_crop_region", "SEG_ELT_bbox", "SEG_ELT_control_net_wrapper", "FLOAT", "STRING") + RETURN_NAMES = ("seg_elt", "cropped_image", "cropped_mask", "crop_region", "bbox", "control_net_wrapper", "confidence", "label") + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, seg_elt): + cropped_image = to_tensor(seg_elt.cropped_image) if seg_elt.cropped_image is not None else None + return (seg_elt, cropped_image, to_tensor(seg_elt.cropped_mask), seg_elt.crop_region, seg_elt.bbox, seg_elt.control_net_wrapper, seg_elt.confidence, seg_elt.label,) + + +class Edit_SEG_ELT: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "seg_elt": ("SEG_ELT", ), + }, + "optional": { + "cropped_image_opt": ("IMAGE", ), + "cropped_mask_opt": ("MASK", ), + "crop_region_opt": ("SEG_ELT_crop_region", ), + "bbox_opt": ("SEG_ELT_bbox", ), + "control_net_wrapper_opt": ("SEG_ELT_control_net_wrapper", ), + "confidence_opt": ("FLOAT", {"min": 0, "max": 1.0, "step": 0.1, "forceInput": True}), + "label_opt": ("STRING", {"multiline": False, "forceInput": True}), + } + } + + RETURN_TYPES = ("SEG_ELT", ) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, seg_elt, cropped_image_opt=None, cropped_mask_opt=None, confidence_opt=None, crop_region_opt=None, + bbox_opt=None, label_opt=None, control_net_wrapper_opt=None): + + cropped_image = seg_elt.cropped_image if cropped_image_opt is None else cropped_image_opt + cropped_mask = seg_elt.cropped_mask if cropped_mask_opt is None else cropped_mask_opt + confidence = seg_elt.confidence if confidence_opt is None else confidence_opt + crop_region = seg_elt.crop_region if crop_region_opt is None else crop_region_opt + bbox = seg_elt.bbox if bbox_opt is None else bbox_opt + label = seg_elt.label if label_opt is None else label_opt + control_net_wrapper = seg_elt.control_net_wrapper if control_net_wrapper_opt is None else control_net_wrapper_opt + + cropped_image = cropped_image.numpy() if cropped_image is not None else None + + if isinstance(cropped_mask, torch.Tensor): + if len(cropped_mask.shape) == 3: + cropped_mask = cropped_mask.squeeze(0) + + cropped_mask = cropped_mask.numpy() + + seg = SEG(cropped_image, cropped_mask, confidence, crop_region, bbox, label, control_net_wrapper) + + return (seg,) + + +class DilateMask: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "mask": ("MASK", ), + "dilation": ("INT", {"default": 10, "min": -512, "max": 512, "step": 1}), + }} + + RETURN_TYPES = ("MASK", ) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, mask, dilation): + mask = 
core.dilate_mask(mask.numpy(), dilation)
+        mask = torch.from_numpy(mask)
+        mask = utils.make_3d_mask(mask)
+        return (mask, )
+
+
+class GaussianBlurMask:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "mask": ("MASK", ),
+                     "kernel_size": ("INT", {"default": 10, "min": 0, "max": 100, "step": 1}),
+                     "sigma": ("FLOAT", {"default": 10.0, "min": 0.1, "max": 100.0, "step": 0.1}),
+                }}
+
+    RETURN_TYPES = ("MASK", )
+
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Util"
+
+    def doit(self, mask, kernel_size, sigma):
+        # Some custom nodes emit non-standard 4-dimensional masks as (b, c, h, w), while the
+        # Impact Pack expects internal 4-dimensional masks as (b, h, w, c). To stay agnostic,
+        # the mask is first normalized to the standard 3-dimensional form before blurring.
+        mask = make_3d_mask(mask)
+        mask = torch.unsqueeze(mask, dim=-1)
+        mask = utils.tensor_gaussian_blur_mask(mask, kernel_size, sigma)
+        mask = torch.squeeze(mask, dim=-1)
+        return (mask, )
+
+
+class DilateMaskInSEGS:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "segs": ("SEGS", ),
+                     "dilation": ("INT", {"default": 10, "min": -512, "max": 512, "step": 1}),
+                }}
+
+    RETURN_TYPES = ("SEGS", )
+
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Util"
+
+    def doit(self, segs, dilation):
+        new_segs = []
+        for seg in segs[1]:
+            mask = core.dilate_mask(seg.cropped_mask, dilation)
+            seg = SEG(seg.cropped_image, mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, seg.control_net_wrapper)
+            new_segs.append(seg)
+
+        return ((segs[0], new_segs), )
+
+
+class GaussianBlurMaskInSEGS:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "segs": ("SEGS", ),
+                     "kernel_size": ("INT", {"default": 10, "min": 0, "max": 100, "step": 1}),
+                     "sigma": ("FLOAT", {"default": 10.0, "min": 0.1, "max": 100.0, "step": 0.1}),
+                }}
+
+    RETURN_TYPES = ("SEGS", )
+
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Util"
+
+    def doit(self, segs, kernel_size, sigma):
+        new_segs = []
+        for seg in segs[1]:
+            mask = utils.tensor_gaussian_blur_mask(seg.cropped_mask, kernel_size, sigma)
+            mask = torch.squeeze(mask, dim=-1).squeeze(0).numpy()
+            seg = SEG(seg.cropped_image, mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, seg.control_net_wrapper)
+            new_segs.append(seg)
+
+        return ((segs[0], new_segs), )
+
+
+class Dilate_SEG_ELT:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "seg_elt": ("SEG_ELT", ),
+                     "dilation": ("INT", {"default": 10, "min": -512, "max": 512, "step": 1}),
+                }}
+
+    RETURN_TYPES = ("SEG_ELT", )
+
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Util"
+
+    def doit(self, seg_elt, dilation):
+        # NOTE: the parameter must be named `seg_elt` to match INPUT_TYPES, since inputs
+        # are passed by keyword.
+        mask = core.dilate_mask(seg_elt.cropped_mask, dilation)
+        seg = SEG(seg_elt.cropped_image, mask, seg_elt.confidence, seg_elt.crop_region, seg_elt.bbox, seg_elt.label, seg_elt.control_net_wrapper)
+        return (seg,)
+
+
+class SEG_ELT_BBOX_ScaleBy:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "seg": ("SEG_ELT", ),
+                     "scale_by": ("FLOAT", {"default": 1.0, "min": 0.01, "max": 8.0, "step": 0.01}), }
+                }
+
+    RETURN_TYPES = ("SEG_ELT", )
+
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Util"
+
+    @staticmethod
+    def fill_zero_outside_bbox(mask, crop_region, bbox):
+        cx1, cy1, _, _ = crop_region
+        x1, y1, x2, y2 = bbox
+        x1, y1, x2, y2 = x1-cx1, y1-cy1, x2-cx1, y2-cy1
+        h, w = mask.shape
+
+        x1 = min(w-1, max(0, x1))
+        x2 = min(w-1, max(0, x2))
+        y1 = min(h-1, max(0, y1))
+        y2 = min(h-1, max(0, y2))
+
+        mask_cropped = mask.copy()
+        mask_cropped[:, :x1] = 0 # zero fill left side
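+        # e.g. (illustrative values) crop_region=(10, 10, 110, 110) with bbox=(30, 40, 80, 90)
+        # maps to (20, 30, 70, 80) inside the 100x100 cropped mask; the four slice
+        # assignments here zero every pixel outside that rectangle.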
mask_cropped[:, x2:] = 0 # zero fill right side + mask_cropped[:y1, :] = 0 # zero fill top side + mask_cropped[y2:, :] = 0 # zero fill bottom side + return mask_cropped + + def doit(self, seg, scale_by): + x1, y1, x2, y2 = seg.bbox + w = x2-x1 + h = y2-y1 + + dw = int((w * scale_by - w)/2) + dh = int((h * scale_by - h)/2) + + bbox = (x1-dw, y1-dh, x2+dw, y2+dh) + + cropped_mask = SEG_ELT_BBOX_ScaleBy.fill_zero_outside_bbox(seg.cropped_mask, seg.crop_region, bbox) + seg = SEG(seg.cropped_image, cropped_mask, seg.confidence, seg.crop_region, bbox, seg.label, seg.control_net_wrapper) + return (seg,) + + +class EmptySEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": {}, } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self): + shape = 0, 0 + return ((shape, []),) + + +class SegsToCombinedMask: + @classmethod + def INPUT_TYPES(s): + return {"required": {"segs": ("SEGS",), }} + + RETURN_TYPES = ("MASK",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Operation" + + def doit(self, segs): + mask = core.segs_to_combined_mask(segs) + mask = utils.make_3d_mask(mask) + return (mask,) + + +class MediaPipeFaceMeshToSEGS: + @classmethod + def INPUT_TYPES(s): + bool_true_widget = ("BOOLEAN", {"default": True, "label_on": "Enabled", "label_off": "Disabled"}) + bool_false_widget = ("BOOLEAN", {"default": False, "label_on": "Enabled", "label_off": "Disabled"}) + return {"required": { + "image": ("IMAGE",), + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + "bbox_fill": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "crop_min_size": ("INT", {"min": 10, "max": MAX_RESOLUTION, "step": 1, "default": 50}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 1}), + "dilation": ("INT", {"default": 0, "min": -512, "max": 512, "step": 1}), + "face": bool_true_widget, + "mouth": bool_false_widget, + "left_eyebrow": bool_false_widget, + "left_eye": bool_false_widget, + "left_pupil": bool_false_widget, + "right_eyebrow": bool_false_widget, + "right_eye": bool_false_widget, + "right_pupil": bool_false_widget, + }, + # "optional": {"reference_image_opt": ("IMAGE", ), } + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Operation" + + def doit(self, image, crop_factor, bbox_fill, crop_min_size, drop_size, dilation, face, mouth, left_eyebrow, left_eye, left_pupil, right_eyebrow, right_eye, right_pupil): + # padding is obsolete now + # https://github.com/Fannovel16/comfyui_controlnet_aux/blob/1ec41fceff1ee99596445a0c73392fd91df407dc/utils.py#L33 + # def calc_pad(h_raw, w_raw): + # resolution = normalize_size_base_64(h_raw, w_raw) + # + # def pad64(x): + # return int(np.ceil(float(x) / 64.0) * 64 - x) + # + # k = float(resolution) / float(min(h_raw, w_raw)) + # h_target = int(np.round(float(h_raw) * k)) + # w_target = int(np.round(float(w_raw) * k)) + # + # return pad64(h_target), pad64(w_target) + + # if reference_image_opt is not None: + # if image.shape[1:] != reference_image_opt.shape[1:]: + # scale_by1 = reference_image_opt.shape[1] / image.shape[1] + # scale_by2 = reference_image_opt.shape[2] / image.shape[2] + # scale_by = min(scale_by1, scale_by2) + # + # # padding is obsolete now + # # h_pad, w_pad = calc_pad(reference_image_opt.shape[1], reference_image_opt.shape[2]) + # # if h_pad != 0: + # # # height padded + # # image = image[:, :-h_pad, :, :] + # # elif w_pad != 0: + # # # width padded + # # image = image[:, :, :-w_pad, :] + # 
+ # image = nodes.ImageScaleBy().upscale(image, "bilinear", scale_by)[0] + + result = core.mediapipe_facemesh_to_segs(image, crop_factor, bbox_fill, crop_min_size, drop_size, dilation, face, mouth, left_eyebrow, left_eye, left_pupil, right_eyebrow, right_eye, right_pupil) + return (result, ) + + +class MaskToSEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "mask": ("MASK",), + "combined": ("BOOLEAN", {"default": False, "label_on": "True", "label_off": "False"}), + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + "bbox_fill": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + "contour_fill": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + } + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Operation" + + def doit(self, mask, combined, crop_factor, bbox_fill, drop_size, contour_fill=False): + mask = make_2d_mask(mask) + + result = core.mask_to_segs(mask, combined, crop_factor, bbox_fill, drop_size, is_contour=contour_fill) + return (result, ) + + +class MaskToSEGS_for_AnimateDiff: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "mask": ("MASK",), + "combined": ("BOOLEAN", {"default": False, "label_on": "True", "label_off": "False"}), + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 100, "step": 0.1}), + "bbox_fill": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "drop_size": ("INT", {"min": 1, "max": MAX_RESOLUTION, "step": 1, "default": 10}), + "contour_fill": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + } + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Operation" + + def doit(self, mask, combined, crop_factor, bbox_fill, drop_size, contour_fill=False): + mask = make_2d_mask(mask) + + segs = core.mask_to_segs(mask, combined, crop_factor, bbox_fill, drop_size, is_contour=contour_fill) + + all_masks = SEGSToMaskList().doit(segs)[0] + + result_mask = (all_masks[0] * 255).to(torch.uint8) + for mask in all_masks[1:]: + result_mask |= (mask * 255).to(torch.uint8) + + result_mask = (result_mask/255.0).to(torch.float32) + result_mask = utils.to_binary_mask(result_mask, 0.1)[0] + + return MaskToSEGS().doit(result_mask, False, crop_factor, False, drop_size, contour_fill) + + +class ControlNetApplySEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS",), + "control_net": ("CONTROL_NET",), + "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.01}), + }, + "optional": { + "segs_preprocessor": ("SEGS_PREPROCESSOR",), + "control_image": ("IMAGE",) + } + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs, control_net, strength, segs_preprocessor=None, control_image=None): + new_segs = [] + + for seg in segs[1]: + control_net_wrapper = core.ControlNetWrapper(control_net, strength, segs_preprocessor, seg.control_net_wrapper, + original_size=segs[0], crop_region=seg.crop_region, control_image=control_image) + new_seg = SEG(seg.cropped_image, seg.cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, control_net_wrapper) + new_segs.append(new_seg) + + return ((segs[0], new_segs), ) + + +class ControlNetApplyAdvancedSEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "segs": ("SEGS",), + "control_net": 
("CONTROL_NET",), + "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.01}), + "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}), + "end_percent": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.001}) + }, + "optional": { + "segs_preprocessor": ("SEGS_PREPROCESSOR",), + "control_image": ("IMAGE",) + } + } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs, control_net, strength, start_percent, end_percent, segs_preprocessor=None, control_image=None): + new_segs = [] + + for seg in segs[1]: + control_net_wrapper = core.ControlNetAdvancedWrapper(control_net, strength, start_percent, end_percent, segs_preprocessor, + seg.control_net_wrapper, original_size=segs[0], crop_region=seg.crop_region, + control_image=control_image) + new_seg = SEG(seg.cropped_image, seg.cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, control_net_wrapper) + new_segs.append(new_seg) + + return ((segs[0], new_segs), ) + + +class ControlNetClearSEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": {"segs": ("SEGS",), }, } + + RETURN_TYPES = ("SEGS",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs): + new_segs = [] + + for seg in segs[1]: + new_seg = SEG(seg.cropped_image, seg.cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, None) + new_segs.append(new_seg) + + return ((segs[0], new_segs), ) + + +class SEGSSwitch: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "select": ("INT", {"default": 1, "min": 1, "max": 99999, "step": 1}), + "segs1": ("SEGS",), + }, + } + + RETURN_TYPES = ("SEGS", ) + + OUTPUT_NODE = True + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, *args, **kwargs): + input_name = f"segs{int(kwargs['select'])}" + + if input_name in kwargs: + return (kwargs[input_name],) + else: + print(f"SEGSSwitch: invalid select index ('segs1' is selected)") + return (kwargs['segs1'],) + + +class SEGSPicker: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "picks": ("STRING", {"multiline": True, "dynamicPrompts": False, "pysssss.autocomplete": False}), + "segs": ("SEGS",), + }, + "optional": { + "fallback_image_opt": ("IMAGE", ), + }, + "hidden": {"unique_id": "UNIQUE_ID"}, + } + + RETURN_TYPES = ("SEGS", ) + + OUTPUT_NODE = True + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, picks, segs, fallback_image_opt=None, unique_id=None): + if fallback_image_opt is not None: + segs = core.segs_scale_match(segs, fallback_image_opt.shape) + + # generate candidates image + cands = [] + for seg in segs[1]: + if seg.cropped_image is not None: + cropped_image = seg.cropped_image + elif fallback_image_opt is not None: + # take from original image + cropped_image = crop_image(fallback_image_opt, seg.crop_region) + else: + cropped_image = empty_pil_tensor() + + mask_array = seg.cropped_mask + mask_array[mask_array < 0.3] = 0.3 + mask_array = mask_array[None, ..., None] + cropped_image = cropped_image * mask_array + + cands.append(cropped_image) + + impact.impact_server.segs_picker_map[unique_id] = cands + + # pass only selected + pick_ids = set() + + for pick in picks.split(","): + try: + pick_ids.add(int(pick)-1) + except Exception: + pass + + new_segs = [] + for i in pick_ids: + if 0 <= i < len(segs[1]): + new_segs.append(segs[1][i]) + + return ((segs[0], new_segs),) + + +class DefaultImageForSEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": { + 
"segs": ("SEGS", ), + "image": ("IMAGE", ), + "override": ("BOOLEAN", {"default": True}), + }} + + RETURN_TYPES = ("SEGS", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs, image, override): + results = [] + + segs = core.segs_scale_match(segs, image.shape) + + if len(segs[1]) > 0: + if segs[1][0].cropped_image is not None: + batch_count = len(segs[1][0].cropped_image) + else: + batch_count = len(image) + + for seg in segs[1]: + if seg.cropped_image is not None and not override: + cropped_image = seg.cropped_image + else: + cropped_image = None + for i in range(0, batch_count): + # take from original image + ref_image = image[i].unsqueeze(0) + cropped_image2 = crop_image(ref_image, seg.crop_region) + + if cropped_image is None: + cropped_image = cropped_image2 + else: + cropped_image = torch.cat((cropped_image, cropped_image2), dim=0) + + new_seg = SEG(cropped_image, seg.cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, seg.control_net_wrapper) + results.append(new_seg) + + return ((segs[0], results), ) + else: + return (segs, ) + + +class RemoveImageFromSEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": {"segs": ("SEGS", ), }} + + RETURN_TYPES = ("SEGS", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, segs): + results = [] + + if len(segs[1]) > 0: + for seg in segs[1]: + new_seg = SEG(None, seg.cropped_mask, seg.confidence, seg.crop_region, seg.bbox, seg.label, seg.control_net_wrapper) + results.append(new_seg) + + return ((segs[0], results), ) + else: + return (segs, ) + + +class MakeTileSEGS: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "images": ("IMAGE", ), + "bbox_size": ("INT", {"default": 512, "min": 64, "max": 4096, "step": 8}), + "crop_factor": ("FLOAT", {"default": 3.0, "min": 1.0, "max": 10, "step": 0.1}), + "min_overlap": ("INT", {"default": 5, "min": 0, "max": 512, "step": 1}), + "filter_segs_dilation": ("INT", {"default": 20, "min": -255, "max": 255, "step": 1}), + "mask_irregularity": ("FLOAT", {"default": 0, "min": 0, "max": 1.0, "step": 0.01}), + "irregular_mask_mode": (["Reuse fast", "Reuse quality", "All random fast", "All random quality"],) + }, + "optional": { + "filter_in_segs_opt": ("SEGS", ), + "filter_out_segs_opt": ("SEGS", ), + } + } + + RETURN_TYPES = ("SEGS",) + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/__for_testing" + + def doit(self, images, bbox_size, crop_factor, min_overlap, filter_segs_dilation, mask_irregularity=0, irregular_mask_mode="Reuse fast", filter_in_segs_opt=None, filter_out_segs_opt=None): + if bbox_size <= 2*min_overlap: + new_min_overlap = 2 / bbox_size + print(f"[MakeTileSEGS] min_overlap should be greater than bbox_size. 
(value changed: {min_overlap} => {new_min_overlap})") + min_overlap = new_min_overlap + + _, ih, iw, _ = images.size() + + mask_cache = None + mask_quality = 512 + if mask_irregularity > 0: + if irregular_mask_mode == "Reuse fast": + mask_quality = 128 + mask_cache = np.zeros((128, 128)).astype(np.float32) + core.random_mask(mask_cache, (0, 0, 128, 128), factor=mask_irregularity, size=mask_quality) + elif irregular_mask_mode == "Reuse quality": + mask_quality = 512 + mask_cache = np.zeros((512, 512)).astype(np.float32) + core.random_mask(mask_cache, (0, 0, 512, 512), factor=mask_irregularity, size=mask_quality) + elif irregular_mask_mode == "All random fast": + mask_quality = 512 + + # create exclusion mask + if filter_out_segs_opt is not None: + exclusion_mask = core.segs_to_combined_mask(filter_out_segs_opt) + exclusion_mask = utils.make_3d_mask(exclusion_mask) + exclusion_mask = utils.resize_mask(exclusion_mask, (ih, iw)) + exclusion_mask = dilate_mask(exclusion_mask.cpu().numpy(), filter_segs_dilation) + else: + exclusion_mask = None + + if filter_in_segs_opt is not None: + and_mask = core.segs_to_combined_mask(filter_in_segs_opt) + and_mask = utils.make_3d_mask(and_mask) + and_mask = utils.resize_mask(and_mask, (ih, iw)) + and_mask = dilate_mask(and_mask.cpu().numpy(), filter_segs_dilation) + + a, b = core.mask_to_segs(and_mask, True, 1.0, False, 0) + if len(b) == 0: + return a, b + + start_x, start_y, c, d = b[0].crop_region + w = c - start_x + h = d - start_y + else: + start_x = 0 + start_y = 0 + h, w = ih, iw + and_mask = None + + # calculate tile factors + if bbox_size > h or bbox_size > w: + new_bbox_size = min(bbox_size, min(w, h)) + print(f"[MaskTileSEGS] bbox_size is greater than resolution (value changed: {bbox_size} => {new_bbox_size}") + bbox_size = new_bbox_size + + n_horizontal = int(w / (bbox_size - min_overlap)) + n_vertical = int(h / (bbox_size - min_overlap)) + + w_overlap_sum = (bbox_size * n_horizontal) - w + if w_overlap_sum < 0: + n_horizontal += 1 + w_overlap_sum = (bbox_size * n_horizontal) - w + + w_overlap_size = 0 if n_horizontal == 1 else int(w_overlap_sum/(n_horizontal-1)) + + h_overlap_sum = (bbox_size * n_vertical) - h + if h_overlap_sum < 0: + n_vertical += 1 + h_overlap_sum = (bbox_size * n_vertical) - h + + h_overlap_size = 0 if n_vertical == 1 else int(h_overlap_sum/(n_vertical-1)) + + new_segs = [] + + y = start_y + for j in range(0, n_vertical): + x = start_x + for i in range(0, n_horizontal): + x1 = x + y1 = y + + if x+bbox_size < iw-1: + x2 = x+bbox_size + else: + x2 = iw + x1 = iw-bbox_size + + if y+bbox_size < ih-1: + y2 = y+bbox_size + else: + y2 = ih + y1 = ih-bbox_size + + bbox = x1, y1, x2, y2 + crop_region = make_crop_region(iw, ih, bbox, crop_factor) + cx1, cy1, cx2, cy2 = crop_region + + mask = np.zeros((cy2 - cy1, cx2 - cx1)).astype(np.float32) + + rel_left = x1 - cx1 + rel_top = y1 - cy1 + rel_right = x2 - cx1 + rel_bot = y2 - cy1 + + if mask_irregularity > 0: + if mask_cache is not None: + core.adaptive_mask_paste(mask, mask_cache, (rel_left, rel_top, rel_right, rel_bot)) + else: + core.random_mask(mask, (rel_left, rel_top, rel_right, rel_bot), factor=mask_irregularity, size=mask_quality) + + # corner filling + if rel_left == 0: + pad = int((x2 - x1) / 8) + mask[rel_top:rel_bot, :pad] = 1.0 + + if rel_top == 0: + pad = int((y2 - y1) / 8) + mask[:pad, rel_left:rel_right] = 1.0 + + if rel_right == mask.shape[1]: + pad = int((x2 - x1) / 8) + mask[rel_top:rel_bot, -pad:] = 1.0 + + if rel_bot == mask.shape[0]: + pad = int((y2 - y1) / 8) + 
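+                        # tile touches the bottom edge of the crop: pin the outer 1/8 band
+                        # to full weight, mirroring the left/top/right cases above, so
+                        # irregular masks still reach the image border without a gap.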
mask[-pad:, rel_left:rel_right] = 1.0 + else: + mask[rel_top:rel_bot, rel_left:rel_right] = 1.0 + + mask = torch.tensor(mask) + + if exclusion_mask is not None: + exclusion_mask_cropped = exclusion_mask[cy1:cy2, cx1:cx2] + mask[exclusion_mask_cropped != 0] = 0.0 + + if and_mask is not None: + and_mask_cropped = and_mask[cy1:cy2, cx1:cx2] + mask[and_mask_cropped == 0] = 0.0 + + is_mask_zero = torch.all(mask == 0.0).item() + + if not is_mask_zero: + item = SEG(None, mask.numpy(), 1.0, crop_region, bbox, "", None) + new_segs.append(item) + + x += bbox_size - w_overlap_size + y += bbox_size - h_overlap_size + + res = (ih, iw), new_segs # segs + return (res,) diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/special_samplers.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/special_samplers.py new file mode 100644 index 0000000000000000000000000000000000000000..d2dec47168b5c6d96d24fd28e065f8b421511e09 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/special_samplers.py @@ -0,0 +1,598 @@ +import math +import impact.core as core +from impact.utils import * +from nodes import MAX_RESOLUTION +import nodes +from impact.impact_sampling import KSamplerWrapper, KSamplerAdvancedWrapper + + +class TiledKSamplerProvider: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}), + "tile_width": ("INT", {"default": 512, "min": 320, "max": MAX_RESOLUTION, "step": 64}), + "tile_height": ("INT", {"default": 512, "min": 320, "max": MAX_RESOLUTION, "step": 64}), + "tiling_strategy": (["random", "padded", 'simple'], ), + "basic_pipe": ("BASIC_PIPE", ) + }} + + RETURN_TYPES = ("KSAMPLER",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Sampler" + + def doit(self, seed, steps, cfg, sampler_name, scheduler, denoise, + tile_width, tile_height, tiling_strategy, basic_pipe): + model, _, _, positive, negative = basic_pipe + sampler = core.TiledKSamplerWrapper(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise, + tile_width, tile_height, tiling_strategy) + return (sampler, ) + + +class KSamplerProvider: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}), + "basic_pipe": ("BASIC_PIPE", ) + }, + } + + RETURN_TYPES = ("KSAMPLER",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Sampler" + + def doit(self, seed, steps, cfg, sampler_name, scheduler, denoise, basic_pipe): + model, _, _, positive, negative = basic_pipe + sampler = KSamplerWrapper(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, denoise) + return (sampler, ) + + +class KSamplerAdvancedProvider: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": 
(comfy.samplers.KSampler.SCHEDULERS, ), + "sigma_factor": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.01}), + "basic_pipe": ("BASIC_PIPE", ) + }, + "optional": { + "sampler_opt": ("SAMPLER", ) + } + } + + RETURN_TYPES = ("KSAMPLER_ADVANCED",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Sampler" + + def doit(self, cfg, sampler_name, scheduler, basic_pipe, sigma_factor=1.0, sampler_opt=None): + model, _, _, positive, negative = basic_pipe + sampler = KSamplerAdvancedWrapper(model, cfg, sampler_name, scheduler, positive, negative, sampler_opt=sampler_opt, sigma_factor=sigma_factor) + return (sampler, ) + + +class TwoSamplersForMask: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "latent_image": ("LATENT", ), + "base_sampler": ("KSAMPLER", ), + "mask_sampler": ("KSAMPLER", ), + "mask": ("MASK", ) + }, + } + + RETURN_TYPES = ("LATENT", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Sampler" + + def doit(self, latent_image, base_sampler, mask_sampler, mask): + inv_mask = torch.where(mask != 1.0, torch.tensor(1.0), torch.tensor(0.0)) + + latent_image['noise_mask'] = inv_mask + new_latent_image = base_sampler.sample(latent_image) + + new_latent_image['noise_mask'] = mask + new_latent_image = mask_sampler.sample(new_latent_image) + + del new_latent_image['noise_mask'] + + return (new_latent_image, ) + + +class TwoAdvancedSamplersForMask: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}), + "samples": ("LATENT", ), + "base_sampler": ("KSAMPLER_ADVANCED", ), + "mask_sampler": ("KSAMPLER_ADVANCED", ), + "mask": ("MASK", ), + "overlap_factor": ("INT", {"default": 10, "min": 0, "max": 10000}) + }, + } + + RETURN_TYPES = ("LATENT", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Sampler" + + @staticmethod + def mask_erosion(samples, mask, grow_mask_by): + mask = mask.clone() + + w = samples['samples'].shape[3] + h = samples['samples'].shape[2] + + mask2 = torch.nn.functional.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), size=(w, h), mode="bilinear") + if grow_mask_by == 0: + mask_erosion = mask2 + else: + kernel_tensor = torch.ones((1, 1, grow_mask_by, grow_mask_by)) + padding = math.ceil((grow_mask_by - 1) / 2) + + mask_erosion = torch.clamp(torch.nn.functional.conv2d(mask2.round(), kernel_tensor, padding=padding), 0, 1) + + return mask_erosion[:, :, :w, :h].round() + + def doit(self, seed, steps, denoise, samples, base_sampler, mask_sampler, mask, overlap_factor): + + inv_mask = torch.where(mask != 1.0, torch.tensor(1.0), torch.tensor(0.0)) + + adv_steps = int(steps / denoise) + start_at_step = adv_steps - steps + + new_latent_image = samples.copy() + + mask_erosion = TwoAdvancedSamplersForMask.mask_erosion(samples, mask, overlap_factor) + + for i in range(start_at_step, adv_steps): + add_noise = "enable" if i == start_at_step else "disable" + return_with_leftover_noise = "enable" if i+1 != adv_steps else "disable" + + new_latent_image['noise_mask'] = inv_mask + new_latent_image = base_sampler.sample_advanced(add_noise, seed, adv_steps, new_latent_image, i, i + 1, "enable", recovery_mode="ratio additional") + + new_latent_image['noise_mask'] = mask_erosion + new_latent_image = mask_sampler.sample_advanced("disable", seed, adv_steps, new_latent_image, i, i + 1, return_with_leftover_noise, recovery_mode="ratio additional") + 
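+        # After the interleaved pass above (base sampler outside the mask, mask
+        # sampler inside the eroded mask, one scheduler step at a time), drop the
+        # noise mask so downstream nodes receive a plain latent.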
+        del new_latent_image['noise_mask']
+
+        return (new_latent_image, )
+
+
+class RegionalPrompt:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "mask": ("MASK", ),
+                     "advanced_sampler": ("KSAMPLER_ADVANCED", ),
+                },
+                }
+
+    RETURN_TYPES = ("REGIONAL_PROMPTS", )
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Regional"
+
+    def doit(self, mask, advanced_sampler):
+        regional_prompt = core.REGIONAL_PROMPT(mask, advanced_sampler)
+        return ([regional_prompt], )
+
+
+class CombineRegionalPrompts:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "regional_prompts1": ("REGIONAL_PROMPTS", ),
+                },
+                }
+
+    RETURN_TYPES = ("REGIONAL_PROMPTS", )
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Regional"
+
+    def doit(self, **kwargs):
+        res = []
+        for k, v in kwargs.items():
+            res += v
+
+        return (res, )
+
+
+class CombineConditionings:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "conditioning1": ("CONDITIONING", ),
+                },
+                }
+
+    RETURN_TYPES = ("CONDITIONING", )
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Util"
+
+    def doit(self, **kwargs):
+        res = []
+        for k, v in kwargs.items():
+            res += v
+
+        return (res, )
+
+
+class ConcatConditionings:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "conditioning1": ("CONDITIONING", ),
+                },
+                }
+
+    RETURN_TYPES = ("CONDITIONING", )
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Util"
+
+    def doit(self, **kwargs):
+        conditioning_to = list(kwargs.values())[0]
+
+        for k, conditioning_from in list(kwargs.items())[1:]:
+            out = []
+            if len(conditioning_from) > 1:
+                print(f"Warning: ConcatConditionings: '{k}' contains more than one conditioning; only the first entry is concatenated onto conditioning1.")
+
+            cond_from = conditioning_from[0][0]
+
+            for i in range(len(conditioning_to)):
+                t1 = conditioning_to[i][0]
+                tw = torch.cat((t1, cond_from), 1)
+                n = [tw, conditioning_to[i][1].copy()]
+                out.append(n)
+
+            conditioning_to = out
+
+        # return conditioning_to rather than out, so a single input does not hit an
+        # unbound local when the loop body never runs
+        return (conditioning_to, )
+
+
+class RegionalSampler:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {
+                     "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
+                     "seed_2nd": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
+                     "seed_2nd_mode": (["ignore", "fixed", "seed+seed_2nd", "seed-seed_2nd", "increment", "decrement", "randomize"], ),
+                     "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
+                     "base_only_steps": ("INT", {"default": 2, "min": 0, "max": 10000}),
+                     "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
+                     "samples": ("LATENT", ),
+                     "base_sampler": ("KSAMPLER_ADVANCED", ),
+                     "regional_prompts": ("REGIONAL_PROMPTS", ),
+                     "overlap_factor": ("INT", {"default": 10, "min": 0, "max": 10000}),
+                     "restore_latent": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}),
+                     "additional_mode": (["DISABLE", "ratio additional", "ratio between"], {"default": "ratio between"}),
+                     "additional_sampler": (["AUTO", "euler", "heun", "heunpp2", "dpm_2", "dpm_fast", "dpmpp_2m", "ddpm"],),
+                     "additional_sigma_ratio": ("FLOAT", {"default": 0.3, "min": 0.0, "max": 1.0, "step": 0.01}),
+                },
+                "hidden": {"unique_id": "UNIQUE_ID"},
+                }
+
+    RETURN_TYPES = ("LATENT", )
+    FUNCTION = "doit"
+
+    CATEGORY = "ImpactPack/Regional"
+
+    @staticmethod
+    def mask_erosion(samples, mask, grow_mask_by):
+        mask = mask.clone()
+
+        w = samples['samples'].shape[3]
+        h = samples['samples'].shape[2]
+
+        mask2 = torch.nn.functional.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), size=(w, h), mode="bilinear")
+        if grow_mask_by == 
0: + mask_erosion = mask2 + else: + kernel_tensor = torch.ones((1, 1, grow_mask_by, grow_mask_by)) + padding = math.ceil((grow_mask_by - 1) / 2) + + mask_erosion = torch.clamp(torch.nn.functional.conv2d(mask2.round(), kernel_tensor, padding=padding), 0, 1) + + return mask_erosion[:, :, :w, :h].round() + + def doit(self, seed, seed_2nd, seed_2nd_mode, steps, base_only_steps, denoise, samples, base_sampler, regional_prompts, overlap_factor, restore_latent, + additional_mode, additional_sampler, additional_sigma_ratio, unique_id=None): + if restore_latent: + latent_compositor = nodes.NODE_CLASS_MAPPINGS['LatentCompositeMasked']() + else: + latent_compositor = None + + masks = [regional_prompt.mask.numpy() for regional_prompt in regional_prompts] + masks = [np.ceil(mask).astype(np.int32) for mask in masks] + combined_mask = torch.from_numpy(np.bitwise_or.reduce(masks)) + + inv_mask = torch.where(combined_mask == 0, torch.tensor(1.0), torch.tensor(0.0)) + + adv_steps = int(steps / denoise) + start_at_step = adv_steps - steps + + region_len = len(regional_prompts) + total = steps*region_len + + leftover_noise = False + if base_only_steps > 0: + if seed_2nd_mode == 'ignore': + leftover_noise = True + + samples = base_sampler.sample_advanced(True, seed, adv_steps, samples, start_at_step, start_at_step + base_only_steps, leftover_noise, recovery_mode="DISABLE") + + if seed_2nd_mode == "seed+seed_2nd": + seed += seed_2nd + if seed > 1125899906842624: + seed = seed - 1125899906842624 + elif seed_2nd_mode == "seed-seed_2nd": + seed -= seed_2nd + if seed < 0: + seed += 1125899906842624 + elif seed_2nd_mode != 'ignore': + seed = seed_2nd + + new_latent_image = samples.copy() + base_latent_image = None + + if not leftover_noise: + add_noise = True + else: + add_noise = False + + for i in range(start_at_step+base_only_steps, adv_steps): + core.update_node_status(unique_id, f"{i}/{steps} steps | ", ((i-start_at_step)*region_len)/total) + + new_latent_image['noise_mask'] = inv_mask + new_latent_image = base_sampler.sample_advanced(add_noise, seed, adv_steps, new_latent_image, i, i + 1, True, + recovery_mode=additional_mode, recovery_sampler=additional_sampler, recovery_sigma_ratio=additional_sigma_ratio) + + if restore_latent: + if 'noise_mask' in new_latent_image: + del new_latent_image['noise_mask'] + base_latent_image = new_latent_image.copy() + + j = 1 + for regional_prompt in regional_prompts: + if restore_latent: + new_latent_image = base_latent_image.copy() + + core.update_node_status(unique_id, f"{i}/{steps} steps | {j}/{region_len}", ((i-start_at_step)*region_len + j)/total) + + region_mask = regional_prompt.get_mask_erosion(overlap_factor).squeeze(0).squeeze(0) + + new_latent_image['noise_mask'] = region_mask + new_latent_image = regional_prompt.sampler.sample_advanced(False, seed, adv_steps, new_latent_image, i, i + 1, True, + recovery_mode=additional_mode, recovery_sampler=additional_sampler, recovery_sigma_ratio=additional_sigma_ratio) + + if restore_latent: + del new_latent_image['noise_mask'] + base_latent_image = latent_compositor.composite(base_latent_image, new_latent_image, 0, 0, False, region_mask)[0] + new_latent_image = base_latent_image + + j += 1 + + add_noise = False + + # finalize + core.update_node_status(unique_id, f"finalize") + if base_latent_image is not None: + new_latent_image = base_latent_image + else: + base_latent_image = new_latent_image + + new_latent_image['noise_mask'] = inv_mask + new_latent_image = base_sampler.sample_advanced(False, seed, adv_steps, 
new_latent_image, adv_steps, adv_steps+1, False, + recovery_mode=additional_mode, recovery_sampler=additional_sampler, recovery_sigma_ratio=additional_sigma_ratio) + + core.update_node_status(unique_id, f"{steps}/{steps} steps", total) + core.update_node_status(unique_id, "", None) + + if restore_latent: + new_latent_image = base_latent_image + + if 'noise_mask' in new_latent_image: + del new_latent_image['noise_mask'] + + return (new_latent_image, ) + + +class RegionalSamplerAdvanced: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "add_noise": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + "noise_seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "start_at_step": ("INT", {"default": 0, "min": 0, "max": 10000}), + "end_at_step": ("INT", {"default": 10000, "min": 0, "max": 10000}), + "overlap_factor": ("INT", {"default": 10, "min": 0, "max": 10000}), + "restore_latent": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}), + "return_with_leftover_noise": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "latent_image": ("LATENT", ), + "base_sampler": ("KSAMPLER_ADVANCED", ), + "regional_prompts": ("REGIONAL_PROMPTS", ), + "additional_mode": (["DISABLE", "ratio additional", "ratio between"], {"default": "ratio between"}), + "additional_sampler": (["AUTO", "euler", "heun", "heunpp2", "dpm_2", "dpm_fast", "dpmpp_2m", "ddpm"],), + "additional_sigma_ratio": ("FLOAT", {"default": 0.3, "min": 0.0, "max": 1.0, "step": 0.01}), + }, + "hidden": {"unique_id": "UNIQUE_ID"}, + } + + RETURN_TYPES = ("LATENT", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Regional" + + def doit(self, add_noise, noise_seed, steps, start_at_step, end_at_step, overlap_factor, restore_latent, return_with_leftover_noise, latent_image, base_sampler, regional_prompts, + additional_mode, additional_sampler, additional_sigma_ratio, unique_id): + + if restore_latent: + latent_compositor = nodes.NODE_CLASS_MAPPINGS['LatentCompositeMasked']() + else: + latent_compositor = None + + masks = [regional_prompt.mask.numpy() for regional_prompt in regional_prompts] + masks = [np.ceil(mask).astype(np.int32) for mask in masks] + combined_mask = torch.from_numpy(np.bitwise_or.reduce(masks)) + + inv_mask = torch.where(combined_mask == 0, torch.tensor(1.0), torch.tensor(0.0)) + + region_len = len(regional_prompts) + end_at_step = min(steps, end_at_step) + total = (end_at_step - start_at_step) * region_len + + new_latent_image = latent_image.copy() + base_latent_image = None + region_masks = {} + + for i in range(start_at_step, end_at_step-1): + core.update_node_status(unique_id, f"{start_at_step+i}/{end_at_step} steps | ", ((i-start_at_step)*region_len)/total) + + cur_add_noise = True if i == start_at_step and add_noise else False + + new_latent_image['noise_mask'] = inv_mask + new_latent_image = base_sampler.sample_advanced(cur_add_noise, noise_seed, steps, new_latent_image, i, i + 1, True, + recovery_mode=additional_mode, recovery_sampler=additional_sampler, recovery_sigma_ratio=additional_sigma_ratio) + + if restore_latent: + del new_latent_image['noise_mask'] + base_latent_image = new_latent_image.copy() + + j = 1 + for regional_prompt in regional_prompts: + if restore_latent: + new_latent_image = base_latent_image.copy() + + core.update_node_status(unique_id, f"{start_at_step+i}/{end_at_step} steps | {j}/{region_len}", ((i-start_at_step)*region_len + 
j)/total) + + if j not in region_masks: + region_mask = regional_prompt.get_mask_erosion(overlap_factor).squeeze(0).squeeze(0) + region_masks[j] = region_mask + else: + region_mask = region_masks[j] + + new_latent_image['noise_mask'] = region_mask + new_latent_image = regional_prompt.sampler.sample_advanced(False, noise_seed, steps, new_latent_image, i, i + 1, True, + recovery_mode=additional_mode, recovery_sampler=additional_sampler, recovery_sigma_ratio=additional_sigma_ratio) + + if restore_latent: + del new_latent_image['noise_mask'] + base_latent_image = latent_compositor.composite(base_latent_image, new_latent_image, 0, 0, False, region_mask)[0] + new_latent_image = base_latent_image + + j += 1 + + # finalize + core.update_node_status(unique_id, f"finalize") + if base_latent_image is not None: + new_latent_image = base_latent_image + else: + base_latent_image = new_latent_image + + new_latent_image['noise_mask'] = inv_mask + new_latent_image = base_sampler.sample_advanced(False, noise_seed, steps, new_latent_image, end_at_step-1, end_at_step, return_with_leftover_noise, + recovery_mode=additional_mode, recovery_sampler=additional_sampler, recovery_sigma_ratio=additional_sigma_ratio) + + core.update_node_status(unique_id, f"{end_at_step}/{end_at_step} steps", total) + core.update_node_status(unique_id, "", None) + + if restore_latent: + new_latent_image = base_latent_image + + if 'noise_mask' in new_latent_image: + del new_latent_image['noise_mask'] + + return (new_latent_image, ) + + +class KSamplerBasicPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": + {"basic_pipe": ("BASIC_PIPE",), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "latent_image": ("LATENT", ), + "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}), + } + } + + RETURN_TYPES = ("BASIC_PIPE", "LATENT", "VAE") + FUNCTION = "sample" + + CATEGORY = "sampling" + + def sample(self, basic_pipe, seed, steps, cfg, sampler_name, scheduler, latent_image, denoise=1.0): + model, clip, vae, positive, negative = basic_pipe + latent = nodes.KSampler().sample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise)[0] + return basic_pipe, latent, vae + + +class KSamplerAdvancedBasicPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": + {"basic_pipe": ("BASIC_PIPE",), + "add_noise": ("BOOLEAN", {"default": True, "label_on": "enable", "label_off": "disable"}), + "noise_seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "latent_image": ("LATENT", ), + "start_at_step": ("INT", {"default": 0, "min": 0, "max": 10000}), + "end_at_step": ("INT", {"default": 10000, "min": 0, "max": 10000}), + "return_with_leftover_noise": ("BOOLEAN", {"default": False, "label_on": "enable", "label_off": "disable"}), + } + } + + RETURN_TYPES = ("BASIC_PIPE", "LATENT", "VAE") + FUNCTION = "sample" + + CATEGORY = "sampling" + + def sample(self, basic_pipe, add_noise, noise_seed, steps, cfg, sampler_name, scheduler, latent_image, start_at_step, end_at_step, 
return_with_leftover_noise, denoise=1.0): + model, clip, vae, positive, negative = basic_pipe + + if add_noise: + add_noise = "enable" + else: + add_noise = "disable" + + if return_with_leftover_noise: + return_with_leftover_noise = "enable" + else: + return_with_leftover_noise = "disable" + + latent = nodes.KSamplerAdvanced().sample(model, add_noise, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, start_at_step, end_at_step, return_with_leftover_noise, denoise)[0] + return basic_pipe, latent, vae + diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/util_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/util_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..1484d520bf1e6684df163e010c10e16526c10c5e --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/util_nodes.py @@ -0,0 +1,487 @@ +from impact.utils import any_typ, ByPassTypeTuple, make_3d_mask +import comfy_extras.nodes_mask +from nodes import MAX_RESOLUTION +import torch +import comfy +import sys +import nodes + + +class GeneralSwitch: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "select": ("INT", {"default": 1, "min": 1, "max": 999999, "step": 1}), + "sel_mode": ("BOOLEAN", {"default": True, "label_on": "select_on_prompt", "label_off": "select_on_execution", "forceInput": False}), + }, + "optional": { + "input1": (any_typ,), + }, + "hidden": {"unique_id": "UNIQUE_ID", "extra_pnginfo": "EXTRA_PNGINFO"} + } + + RETURN_TYPES = (any_typ, "STRING", "INT") + RETURN_NAMES = ("selected_value", "selected_label", "selected_index") + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, *args, **kwargs): + selected_index = int(kwargs['select']) + input_name = f"input{selected_index}" + + selected_label = input_name + node_id = kwargs['unique_id'] + nodelist = kwargs['extra_pnginfo']['workflow']['nodes'] + for node in nodelist: + if str(node['id']) == node_id: + inputs = node['inputs'] + + for slot in inputs: + if slot['name'] == input_name and 'label' in slot: + selected_label = slot['label'] + + break + + if input_name in kwargs: + return (kwargs[input_name], selected_label, selected_index) + else: + print(f"ImpactSwitch: invalid select index (ignored)") + return (None, "", selected_index) + + +class LatentSwitch: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "select": ("INT", {"default": 1, "min": 1, "max": 99999, "step": 1}), + "latent1": ("LATENT",), + }, + } + + RETURN_TYPES = ("LATENT", ) + + OUTPUT_NODE = True + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, *args, **kwargs): + input_name = f"latent{int(kwargs['select'])}" + + if input_name in kwargs: + return (kwargs[input_name],) + else: + print(f"LatentSwitch: invalid select index ('latent1' is selected)") + return (kwargs['latent1'],) + + +class ImageMaskSwitch: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "select": ("INT", {"default": 1, "min": 1, "max": 4, "step": 1}), + "images1": ("IMAGE",), + }, + + "optional": { + "mask1_opt": ("MASK",), + "images2_opt": ("IMAGE",), + "mask2_opt": ("MASK",), + "images3_opt": ("IMAGE",), + "mask3_opt": ("MASK",), + "images4_opt": ("IMAGE",), + "mask4_opt": ("MASK",), + }, + } + + RETURN_TYPES = ("IMAGE", "MASK",) + + OUTPUT_NODE = True + + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, select, images1, mask1_opt=None, images2_opt=None, mask2_opt=None, images3_opt=None, mask3_opt=None, + images4_opt=None, mask4_opt=None): + if select == 1: 
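+        # `select` is 1-based; if the chosen image/mask pair is not connected,
+        # the corresponding optional inputs simply fall through as None.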
+ return images1, mask1_opt, + elif select == 2: + return images2_opt, mask2_opt, + elif select == 3: + return images3_opt, mask3_opt, + else: + return images4_opt, mask4_opt, + + +class GeneralInversedSwitch: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "select": ("INT", {"default": 1, "min": 1, "max": 999999, "step": 1}), + "input": (any_typ,), + }, + "hidden": {"unique_id": "UNIQUE_ID"}, + } + + RETURN_TYPES = ByPassTypeTuple((any_typ, )) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, select, input, unique_id): + res = [] + + for i in range(0, select): + if select == i+1: + res.append(input) + else: + res.append(None) + + return res + + +class RemoveNoiseMask: + @classmethod + def INPUT_TYPES(s): + return {"required": {"samples": ("LATENT",)}} + + RETURN_TYPES = ("LATENT",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, samples): + res = {key: value for key, value in samples.items() if key != 'noise_mask'} + return (res, ) + + +class ImagePasteMasked: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "destination": ("IMAGE",), + "source": ("IMAGE",), + "x": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION, "step": 1}), + "y": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION, "step": 1}), + "resize_source": ("BOOLEAN", {"default": False}), + }, + "optional": { + "mask": ("MASK",), + } + } + RETURN_TYPES = ("IMAGE",) + FUNCTION = "composite" + + CATEGORY = "image" + + def composite(self, destination, source, x, y, resize_source, mask = None): + destination = destination.clone().movedim(-1, 1) + output = comfy_extras.nodes_mask.composite(destination, source.movedim(-1, 1), x, y, mask, 1, resize_source).movedim(1, -1) + return (output,) + + +from impact.utils import any_typ + +class ImpactLogger: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "data": (any_typ, ""), + }, + "hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"}, + } + + CATEGORY = "ImpactPack/Debug" + + OUTPUT_NODE = True + + RETURN_TYPES = () + FUNCTION = "doit" + + def doit(self, data, prompt, extra_pnginfo): + shape = "" + if hasattr(data, "shape"): + shape = f"{data.shape} / " + + print(f"[IMPACT LOGGER]: {shape}{data}") + + print(f" PROMPT: {prompt}") + + # for x in prompt: + # if 'inputs' in x and 'populated_text' in x['inputs']: + # print(f"PROMP: {x['10']['inputs']['populated_text']}") + # + # for x in extra_pnginfo['workflow']['nodes']: + # if x['type'] == 'ImpactWildcardProcessor': + # print(f" WV : {x['widgets_values'][1]}\n") + + return {} + + +class ImpactDummyInput: + @classmethod + def INPUT_TYPES(s): + return {"required": {}} + + CATEGORY = "ImpactPack/Debug" + + RETURN_TYPES = (any_typ,) + FUNCTION = "doit" + + def doit(self): + return ("DUMMY",) + + +class MasksToMaskList: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "masks": ("MASK", ), + } + } + + RETURN_TYPES = ("MASK", ) + OUTPUT_IS_LIST = (True, ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Operation" + + def doit(self, masks): + if masks is None: + empty_mask = torch.zeros((64, 64), dtype=torch.float32, device="cpu") + return ([empty_mask], ) + + res = [] + + for mask in masks: + res.append(mask) + + print(f"mask len: {len(res)}") + + res = [make_3d_mask(x) for x in res] + + return (res, ) + + +class MaskListToMaskBatch: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "mask": ("MASK", ), + } + } + + INPUT_IS_LIST = True + + RETURN_TYPES = ("MASK", ) + FUNCTION = "doit" + + CATEGORY = 
"ImpactPack/Operation" + + def doit(self, mask): + if len(mask) == 1: + mask = make_3d_mask(mask[0]) + return (mask,) + elif len(mask) > 1: + mask1 = make_3d_mask(mask[0]) + + for mask2 in mask[1:]: + mask2 = make_3d_mask(mask2) + if mask1.shape[1:] != mask2.shape[1:]: + mask2 = comfy.utils.common_upscale(mask2.movedim(-1, 1), mask1.shape[2], mask1.shape[1], "lanczos", "center").movedim(1, -1) + mask1 = torch.cat((mask1, mask2), dim=0) + + return (mask1,) + else: + empty_mask = torch.zeros((1, 64, 64), dtype=torch.float32, device="cpu").unsqueeze(0) + return (empty_mask,) + + +class ImageListToImageBatch: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "images": ("IMAGE", ), + } + } + + INPUT_IS_LIST = True + + RETURN_TYPES = ("IMAGE", ) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Operation" + + def doit(self, images): + if len(images) <= 1: + return (images,) + else: + image1 = images[0] + for image2 in images[1:]: + if image1.shape[1:] != image2.shape[1:]: + image2 = comfy.utils.common_upscale(image2.movedim(-1, 1), image1.shape[2], image1.shape[1], "lanczos", "center").movedim(1, -1) + image1 = torch.cat((image1, image2), dim=0) + return (image1,) + + +class ImageBatchToImageList: + @classmethod + def INPUT_TYPES(s): + return {"required": {"image": ("IMAGE",), }} + + RETURN_TYPES = ("IMAGE",) + OUTPUT_IS_LIST = (True,) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, image): + images = [image[i:i + 1, ...] for i in range(image.shape[0])] + return (images, ) + + +class MakeImageList: + @classmethod + def INPUT_TYPES(s): + return {"required": {"image1": ("IMAGE",), }} + + RETURN_TYPES = ("IMAGE",) + OUTPUT_IS_LIST = (True,) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, **kwargs): + images = [] + + for k, v in kwargs.items(): + images.append(v) + + return (images, ) + + +class MakeImageBatch: + @classmethod + def INPUT_TYPES(s): + return {"required": {"image1": ("IMAGE",), }} + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, **kwargs): + image1 = kwargs['image1'] + del kwargs['image1'] + images = [value for value in kwargs.values()] + + if len(images) == 0: + return (image1,) + else: + for image2 in images: + if image1.shape[1:] != image2.shape[1:]: + image2 = comfy.utils.common_upscale(image2.movedim(-1, 1), image1.shape[2], image1.shape[1], "lanczos", "center").movedim(1, -1) + image1 = torch.cat((image1, image2), dim=0) + return (image1,) + + +class ReencodeLatent: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "samples": ("LATENT", ), + "tile_mode": (["None", "Both", "Decode(input) only", "Encode(output) only"],), + "input_vae": ("VAE", ), + "output_vae": ("VAE", ), + "tile_size": ("INT", {"default": 512, "min": 320, "max": 4096, "step": 64}), + }, + } + + CATEGORY = "ImpactPack/Util" + + RETURN_TYPES = ("LATENT", ) + FUNCTION = "doit" + + def doit(self, samples, tile_mode, input_vae, output_vae, tile_size=512): + if tile_mode in ["Both", "Decode(input) only"]: + pixels = nodes.VAEDecodeTiled().decode(input_vae, samples, tile_size)[0] + else: + pixels = nodes.VAEDecode().decode(input_vae, samples)[0] + + if tile_mode in ["Both", "Encode(output) only"]: + return nodes.VAEEncodeTiled().encode(output_vae, pixels, tile_size) + else: + return nodes.VAEEncode().encode(output_vae, pixels) + + +class ReencodeLatentPipe: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "samples": ("LATENT", ), + "tile_mode": (["None", "Both", "Decode(input) only", 
"Encode(output) only"],), + "input_basic_pipe": ("BASIC_PIPE", ), + "output_basic_pipe": ("BASIC_PIPE", ), + }, + } + + CATEGORY = "ImpactPack/Util" + + RETURN_TYPES = ("LATENT", ) + FUNCTION = "doit" + + def doit(self, samples, tile_mode, input_basic_pipe, output_basic_pipe): + _, _, input_vae, _, _ = input_basic_pipe + _, _, output_vae, _, _ = output_basic_pipe + return ReencodeLatent().doit(samples, tile_mode, input_vae, output_vae) + + +class StringSelector: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "strings": ("STRING", {"multiline": True}), + "multiline": ("BOOLEAN", {"default": False, "label_on": "enabled", "label_off": "disabled"}), + "select": ("INT", {"min": 0, "max": sys.maxsize, "step": 1, "default": 0}), + }} + + RETURN_TYPES = ("STRING",) + FUNCTION = "doit" + + CATEGORY = "ImpactPack/Util" + + def doit(self, strings, multiline, select): + lines = strings.split('\n') + + if multiline: + result = [] + current_string = "" + + for line in lines: + if line.startswith("#"): + if current_string: + result.append(current_string.strip()) + current_string = "" + current_string += line + "\n" + + if current_string: + result.append(current_string.strip()) + + if len(result) == 0: + selected = strings + else: + selected = result[select % len(result)] + + if selected.startswith('#'): + selected = selected[1:] + else: + if len(lines) == 0: + selected = strings + else: + selected = lines[select % len(lines)] + + return (selected, ) diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/utils.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..9802214fd81112fea0838a4da57ad0c29b42609c --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/utils.py @@ -0,0 +1,625 @@ +import torch +import torchvision +import cv2 +import numpy as np +import folder_paths +import nodes +from . 
import config
+from PIL import Image, ImageFilter
+from scipy.ndimage import zoom
+import comfy
+
+
+class TensorBatchBuilder:
+    def __init__(self):
+        self.tensor = None
+
+    def concat(self, new_tensor):
+        if self.tensor is None:
+            self.tensor = new_tensor
+        else:
+            self.tensor = torch.concat((self.tensor, new_tensor), dim=0)
+
+
+def tensor_convert_rgba(image, prefer_copy=True):
+    """Assumes NHWC format tensor with 1, 3 or 4 channels."""
+    _tensor_check_image(image)
+    n_channel = image.shape[-1]
+    if n_channel == 4:
+        return image
+
+    if n_channel == 3:
+        alpha = torch.ones((*image.shape[:-1], 1))
+        return torch.cat((image, alpha), axis=-1)
+
+    if n_channel == 1:
+        # grayscale -> broadcast the single channel to 4 channels
+        if prefer_copy:
+            image = image.repeat(1, 1, 1, 4)
+        else:
+            image = image.expand(-1, -1, -1, 4)
+        return image
+
+    # NOTE: Similar error message as in PIL, for easier googling :P
+    raise ValueError(f"illegal conversion (channels: {n_channel} -> 4)")
+
+
+def tensor_convert_rgb(image, prefer_copy=True):
+    """Assumes NHWC format tensor with 1, 3 or 4 channels."""
+    _tensor_check_image(image)
+    n_channel = image.shape[-1]
+    if n_channel == 3:
+        return image
+
+    if n_channel == 4:
+        image = image[..., :3]
+        if prefer_copy:
+            image = image.clone()
+        return image
+
+    if n_channel == 1:
+        # grayscale -> broadcast the single channel to 3 channels
+        if prefer_copy:
+            image = image.repeat(1, 1, 1, 3)
+        else:
+            image = image.expand(-1, -1, -1, 3)
+        return image
+
+    # NOTE: Same error message as in PIL, for easier googling :P
+    raise ValueError(f"illegal conversion (channels: {n_channel} -> 3)")
+
+
+def general_tensor_resize(image, w: int, h: int):
+    _tensor_check_image(image)
+    image = image.permute(0, 3, 1, 2)
+    image = torch.nn.functional.interpolate(image, size=(h, w), mode="bilinear")
+    image = image.permute(0, 2, 3, 1)
+    return image
+
+
+# TODO: Sadly, we need LANCZOS
+LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)
+def tensor_resize(image, w: int, h: int):
+    _tensor_check_image(image)
+    if image.shape[3] >= 3:
+        scaled_images = TensorBatchBuilder()
+        for single_image in image:
+            single_image = single_image.unsqueeze(0)
+            single_pil = tensor2pil(single_image)
+            scaled_pil = single_pil.resize((w, h), resample=LANCZOS)
+
+            single_image = pil2tensor(scaled_pil)
+            scaled_images.concat(single_image)
+
+        return scaled_images.tensor
+    else:
+        return general_tensor_resize(image, w, h)
+
+
+def tensor_get_size(image):
+    """Mimicking `PIL.Image.size`"""
+    _tensor_check_image(image)
+    _, h, w, _ = image.shape
+    return (w, h)
+
+
+def tensor2pil(image):
+    _tensor_check_image(image)
+    return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(0), 0, 255).astype(np.uint8))
+
+
+def pil2tensor(image):
+    return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)
+
+
+def numpy2pil(image):
+    return Image.fromarray(np.clip(255. 
* image.squeeze(0), 0, 255).astype(np.uint8)) + + +def to_pil(image): + if isinstance(image, Image.Image): + return image + if isinstance(image, torch.Tensor): + return tensor2pil(image) + if isinstance(image, np.ndarray): + return numpy2pil(image) + raise ValueError(f"Cannot convert {type(image)} to PIL.Image") + + +def to_tensor(image): + if isinstance(image, Image.Image): + return torch.from_numpy(np.array(image)) + if isinstance(image, torch.Tensor): + return image + if isinstance(image, np.ndarray): + return torch.from_numpy(image) + raise ValueError(f"Cannot convert {type(image)} to torch.Tensor") + + +def to_numpy(image): + if isinstance(image, Image.Image): + return np.array(image) + if isinstance(image, torch.Tensor): + return image.numpy() + if isinstance(image, np.ndarray): + return image + raise ValueError(f"Cannot convert {type(image)} to numpy.ndarray") + + + +def tensor_putalpha(image, mask): + _tensor_check_image(image) + _tensor_check_mask(mask) + image[..., -1] = mask[..., 0] + + +def _tensor_check_image(image): + if image.ndim != 4: + raise ValueError(f"Expected NHWC tensor, but found {image.ndim} dimensions") + if image.shape[-1] not in (1, 3, 4): + raise ValueError(f"Expected 1, 3 or 4 channels for image, but found {image.shape[-1]} channels") + return + + +def _tensor_check_mask(mask): + if mask.ndim != 4: + raise ValueError(f"Expected NHWC tensor, but found {mask.ndim} dimensions") + if mask.shape[-1] != 1: + raise ValueError(f"Expected 1 channel for mask, but found {mask.shape[-1]} channels") + return + + +def tensor_crop(image, crop_region): + _tensor_check_image(image) + return crop_ndarray4(image, crop_region) + + +def tensor2numpy(image): + _tensor_check_image(image) + return image.numpy() + + +def tensor_paste(image1, image2, left_top, mask): + """Mask and image2 has to be the same size""" + _tensor_check_image(image1) + _tensor_check_image(image2) + _tensor_check_mask(mask) + if image2.shape[1:3] != mask.shape[1:3]: + raise ValueError(f"Inconsistent size: Image ({image2.shape[1:3]}) != Mask ({mask.shape[1:3]})") + + x, y = left_top + _, h1, w1, _ = image1.shape + _, h2, w2, _ = image2.shape + + # calculate image patch size + w = min(w1, x + w2) - x + h = min(h1, y + h2) - y + + # If the patch is out of bound, nothing to do! 
+ if w <= 0 or h <= 0: + return + + mask = mask[:, :h, :w, :] + image1[:, y:y+h, x:x+w, :] = ( + (1 - mask) * image1[:, y:y+h, x:x+w, :] + + mask * image2[:, :h, :w, :] + ) + return + + +def center_of_bbox(bbox): + w, h = bbox[2] - bbox[0], bbox[3] - bbox[1] + return bbox[0] + w/2, bbox[1] + h/2 + + +def combine_masks(masks): + if len(masks) == 0: + return None + else: + initial_cv2_mask = np.array(masks[0][1]) + combined_cv2_mask = initial_cv2_mask + + for i in range(1, len(masks)): + cv2_mask = np.array(masks[i][1]) + + if combined_cv2_mask.shape == cv2_mask.shape: + combined_cv2_mask = cv2.bitwise_or(combined_cv2_mask, cv2_mask) + else: + # do nothing - incompatible mask + pass + + mask = torch.from_numpy(combined_cv2_mask) + return mask + + +def combine_masks2(masks): + if len(masks) == 0: + return None + else: + initial_cv2_mask = np.array(masks[0]).astype(np.uint8) + combined_cv2_mask = initial_cv2_mask + + for i in range(1, len(masks)): + cv2_mask = np.array(masks[i]).astype(np.uint8) + + if combined_cv2_mask.shape == cv2_mask.shape: + combined_cv2_mask = cv2.bitwise_or(combined_cv2_mask, cv2_mask) + else: + # do nothing - incompatible mask + pass + + mask = torch.from_numpy(combined_cv2_mask) + return mask + + +def bitwise_and_masks(mask1, mask2): + mask1 = mask1.cpu() + mask2 = mask2.cpu() + cv2_mask1 = np.array(mask1) + cv2_mask2 = np.array(mask2) + + if cv2_mask1.shape == cv2_mask2.shape: + cv2_mask = cv2.bitwise_and(cv2_mask1, cv2_mask2) + return torch.from_numpy(cv2_mask) + else: + # do nothing - incompatible mask shape: mostly empty mask + return mask1 + + +def to_binary_mask(mask, threshold=0): + mask = make_3d_mask(mask) + + mask = mask.clone().cpu() + mask[mask > threshold] = 1. + mask[mask <= threshold] = 0. + return mask + + +def use_gpu_opencv(): + return not config.get_config()['disable_gpu_opencv'] + + +def dilate_mask(mask, dilation_factor, iter=1): + if dilation_factor == 0: + return make_2d_mask(mask) + + mask = make_2d_mask(mask) + + kernel = np.ones((abs(dilation_factor), abs(dilation_factor)), np.uint8) + + if use_gpu_opencv(): + mask = cv2.UMat(mask) + kernel = cv2.UMat(kernel) + + if dilation_factor > 0: + result = cv2.dilate(mask, kernel, iter) + else: + result = cv2.erode(mask, kernel, iter) + + if use_gpu_opencv(): + return result.get() + else: + return result + + +def dilate_masks(segmasks, dilation_factor, iter=1): + if dilation_factor == 0: + return segmasks + + dilated_masks = [] + kernel = np.ones((abs(dilation_factor), abs(dilation_factor)), np.uint8) + + if use_gpu_opencv(): + kernel = cv2.UMat(kernel) + + for i in range(len(segmasks)): + cv2_mask = segmasks[i][1] + + if use_gpu_opencv(): + cv2_mask = cv2.UMat(cv2_mask) + + if dilation_factor > 0: + dilated_mask = cv2.dilate(cv2_mask, kernel, iter) + else: + dilated_mask = cv2.erode(cv2_mask, kernel, iter) + + if use_gpu_opencv(): + dilated_mask = dilated_mask.get() + + item = (segmasks[i][0], dilated_mask, segmasks[i][2]) + dilated_masks.append(item) + + return dilated_masks + +import torch.nn.functional as F +def feather_mask(mask, thickness): + mask = mask.permute(0, 3, 1, 2) + + # Gaussian kernel for blurring + kernel_size = 2 * int(thickness) + 1 + sigma = thickness / 3 # Adjust the sigma value as needed + blur_kernel = _gaussian_kernel(kernel_size, sigma).to(mask.device, mask.dtype) + + # Apply blur to the mask + blurred_mask = F.conv2d(mask, blur_kernel.unsqueeze(0).unsqueeze(0), padding=thickness) + + blurred_mask = blurred_mask.permute(0, 2, 3, 1) + + return blurred_mask + +def 
_gaussian_kernel(kernel_size, sigma): + # Generate a 1D Gaussian kernel + kernel = torch.exp(-(torch.arange(kernel_size) - kernel_size // 2)**2 / (2 * sigma**2)) + return kernel / kernel.sum() + + +def tensor_gaussian_blur_mask(mask, kernel_size, sigma=10.0): + """Return NHWC torch.Tenser from ndim == 2 or 4 `np.ndarray` or `torch.Tensor`""" + if isinstance(mask, np.ndarray): + mask = torch.from_numpy(mask) + + if mask.ndim == 2: + mask = mask[None, ..., None] + elif mask.ndim == 3: + mask = mask[..., None] + + _tensor_check_mask(mask) + + if kernel_size <= 0: + return mask + + kernel_size = kernel_size*2+1 + + shortest = min(mask.shape[1], mask.shape[2]) + if shortest <= kernel_size: + kernel_size = int(shortest/2) + if kernel_size % 2 == 0: + kernel_size += 1 + if kernel_size < 3: + return mask # skip feathering + + prev_device = mask.device + device = comfy.model_management.get_torch_device() + mask.to(device) + + # apply gaussian blur + mask = mask[:, None, ..., 0] + blurred_mask = torchvision.transforms.GaussianBlur(kernel_size=kernel_size, sigma=sigma)(mask) + blurred_mask = blurred_mask[:, 0, ..., None] + + blurred_mask.to(prev_device) + + return blurred_mask + + +def subtract_masks(mask1, mask2): + mask1 = mask1.cpu() + mask2 = mask2.cpu() + cv2_mask1 = np.array(mask1) * 255 + cv2_mask2 = np.array(mask2) * 255 + + if cv2_mask1.shape == cv2_mask2.shape: + cv2_mask = cv2.subtract(cv2_mask1, cv2_mask2) + return torch.clamp(torch.from_numpy(cv2_mask) / 255.0, min=0, max=1) + else: + # do nothing - incompatible mask shape: mostly empty mask + return mask1 + + +def add_masks(mask1, mask2): + mask1 = mask1.cpu() + mask2 = mask2.cpu() + cv2_mask1 = np.array(mask1) * 255 + cv2_mask2 = np.array(mask2) * 255 + + if cv2_mask1.shape == cv2_mask2.shape: + cv2_mask = cv2.add(cv2_mask1, cv2_mask2) + return torch.clamp(torch.from_numpy(cv2_mask) / 255.0, min=0, max=1) + else: + # do nothing - incompatible mask shape: mostly empty mask + return mask1 + + +def normalize_region(limit, startp, size): + if startp < 0: + new_endp = min(limit, size) + new_startp = 0 + elif startp + size > limit: + new_startp = max(0, limit - size) + new_endp = limit + else: + new_startp = startp + new_endp = min(limit, startp+size) + + return int(new_startp), int(new_endp) + + +def make_crop_region(w, h, bbox, crop_factor, crop_min_size=None): + x1 = bbox[0] + y1 = bbox[1] + x2 = bbox[2] + y2 = bbox[3] + + bbox_w = x2 - x1 + bbox_h = y2 - y1 + + crop_w = bbox_w * crop_factor + crop_h = bbox_h * crop_factor + + if crop_min_size is not None: + crop_w = max(crop_min_size, crop_w) + crop_h = max(crop_min_size, crop_h) + + kernel_x = x1 + bbox_w / 2 + kernel_y = y1 + bbox_h / 2 + + new_x1 = int(kernel_x - crop_w / 2) + new_y1 = int(kernel_y - crop_h / 2) + + # make sure position in (w,h) + new_x1, new_x2 = normalize_region(w, new_x1, crop_w) + new_y1, new_y2 = normalize_region(h, new_y1, crop_h) + + return [new_x1, new_y1, new_x2, new_y2] + + +def crop_ndarray4(npimg, crop_region): + x1 = crop_region[0] + y1 = crop_region[1] + x2 = crop_region[2] + y2 = crop_region[3] + + cropped = npimg[:, y1:y2, x1:x2, :] + + return cropped + + +crop_tensor4 = crop_ndarray4 + + +def crop_ndarray2(npimg, crop_region): + x1 = crop_region[0] + y1 = crop_region[1] + x2 = crop_region[2] + y2 = crop_region[3] + + cropped = npimg[y1:y2, x1:x2] + + return cropped + + +def crop_image(image, crop_region): + return crop_tensor4(image, crop_region) + + +def to_latent_image(pixels, vae): + x = pixels.shape[1] + y = pixels.shape[2] + if pixels.shape[1] != 
x or pixels.shape[2] != y: + pixels = pixels[:, :x, :y, :] + + vae_encode = nodes.VAEEncode() + if hasattr(nodes.VAEEncode, "vae_encode_crop_pixels"): + # backward compatibility + print(f"[Impact Pack] ComfyUI is outdated.") + pixels = nodes.VAEEncode.vae_encode_crop_pixels(pixels) + t = vae.encode(pixels[:, :, :, :3]) + return {"samples": t} + + return vae_encode.encode(vae, pixels)[0] + + +def empty_pil_tensor(w=64, h=64): + return torch.zeros((1, h, w, 3), dtype=torch.float32) + + +def make_2d_mask(mask): + if len(mask.shape) == 4: + return mask.squeeze(0).squeeze(0) + + elif len(mask.shape) == 3: + return mask.squeeze(0) + + return mask + + +def make_3d_mask(mask): + if len(mask.shape) == 4: + return mask.squeeze(0) + + elif len(mask.shape) == 2: + return mask.unsqueeze(0) + + return mask + + +def is_same_device(a, b): + a_device = torch.device(a) if isinstance(a, str) else a + b_device = torch.device(b) if isinstance(b, str) else b + return a_device.type == b_device.type and a_device.index == b_device.index + + +def collect_non_reroute_nodes(node_map, links, res, node_id): + if node_map[node_id]['type'] != 'Reroute' and node_map[node_id]['type'] != 'Reroute (rgthree)': + res.append(node_id) + else: + for link in node_map[node_id]['outputs'][0]['links']: + next_node_id = str(links[link][2]) + collect_non_reroute_nodes(node_map, links, res, next_node_id) + + +from torchvision.transforms.functional import to_pil_image + + +def resize_mask(mask, size): + resized_mask = torch.nn.functional.interpolate(mask.unsqueeze(0), size=size, mode='bilinear', align_corners=False) + return resized_mask.squeeze(0) + + +def apply_mask_alpha_to_pil(decoded_pil, mask): + decoded_rgba = decoded_pil.convert('RGBA') + mask_pil = to_pil_image(mask) + decoded_rgba.putalpha(mask_pil) + + return decoded_rgba + + +def try_install_custom_node(custom_node_url, msg): + try: + import cm_global + cm_global.try_call(api='cm.try-install-custom-node', + sender="Impact Pack", custom_node_url=custom_node_url, msg=msg) + except Exception: + print(msg) + print(f"[Impact Pack] ComfyUI-Manager is outdated. The custom node installation feature is not available.") + + +# author: Trung0246 ---> +class TautologyStr(str): + def __ne__(self, other): + return False + + +class ByPassTypeTuple(tuple): + def __getitem__(self, index): + if index > 0: + index = 0 + item = super().__getitem__(index) + if isinstance(item, str): + return TautologyStr(item) + return item + + +class NonListIterable: + def __init__(self, data): + self.data = data + + def __getitem__(self, index): + return self.data[index] + + +def add_folder_path_and_extensions(folder_name, full_folder_paths, extensions): + # Iterate over the list of full folder paths + for full_folder_path in full_folder_paths: + # Use the provided function to add each model folder path + folder_paths.add_model_folder_path(folder_name, full_folder_path) + + # Now handle the extensions. 
If the folder name already exists, update the extensions + if folder_name in folder_paths.folder_names_and_paths: + # Unpack the current paths and extensions + current_paths, current_extensions = folder_paths.folder_names_and_paths[folder_name] + # Update the extensions set with the new extensions + updated_extensions = current_extensions | extensions + # Reassign the updated tuple back to the dictionary + folder_paths.folder_names_and_paths[folder_name] = (current_paths, updated_extensions) + else: + # If the folder name was not present, add_model_folder_path would have added it with the last path + # Now we just need to update the set of extensions as it would be an empty set + # Also ensure that all paths are included (since add_model_folder_path adds only one path at a time) + folder_paths.folder_names_and_paths[folder_name] = (full_folder_paths, extensions) +# <--- + +# wildcard trick is taken from pythongossss's +class AnyType(str): + def __ne__(self, __value: object) -> bool: + return False + +any_typ = AnyType("*") diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/impact/wildcards.py b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/wildcards.py new file mode 100644 index 0000000000000000000000000000000000000000..7b0c5f81bff2c299f7a4246b65a1bfabcb51d7d7 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/impact/wildcards.py @@ -0,0 +1,452 @@ +import re +import random +import os +import nodes +import folder_paths +import yaml +import numpy as np +import threading +from impact import utils + + +wildcard_lock = threading.Lock() +wildcard_dict = {} + + +def get_wildcard_list(): + with wildcard_lock: + return [f"__{x}__" for x in wildcard_dict.keys()] + + +def get_wildcard_dict(): + global wildcard_dict + with wildcard_lock: + return wildcard_dict + + +def wildcard_normalize(x): + return x.replace("\\", "/").lower() + + +def read_wildcard(k, v): + if isinstance(v, list): + k = wildcard_normalize(k) + wildcard_dict[k] = v + elif isinstance(v, dict): + for k2, v2 in v.items(): + new_key = f"{k}/{k2}" + new_key = wildcard_normalize(new_key) + read_wildcard(new_key, v2) + + +def read_wildcard_dict(wildcard_path): + global wildcard_dict + for root, directories, files in os.walk(wildcard_path, followlinks=True): + for file in files: + if file.endswith('.txt'): + file_path = os.path.join(root, file) + rel_path = os.path.relpath(file_path, wildcard_path) + key = os.path.splitext(rel_path)[0].replace('\\', '/').lower() + + try: + with open(file_path, 'r', encoding="ISO-8859-1") as f: + lines = f.read().splitlines() + wildcard_dict[key] = lines + except UnicodeDecodeError: + with open(file_path, 'r', encoding="UTF-8", errors="ignore") as f: + lines = f.read().splitlines() + wildcard_dict[key] = lines + elif file.endswith('.yaml'): + file_path = os.path.join(root, file) + with open(file_path, 'r') as f: + yaml_data = yaml.load(f, Loader=yaml.FullLoader) + + for k, v in yaml_data.items(): + read_wildcard(k, v) + + return wildcard_dict + + +def process(text, seed=None): + if seed is not None: + random.seed(seed) + random_gen = np.random.default_rng(seed) + + def replace_options(string): + replacements_found = False + + def replace_option(match): + nonlocal replacements_found + options = match.group(1).split('|') + + multi_select_pattern = options[0].split('$$') + select_range = None + select_sep = ' ' + range_pattern = r'(\d+)(-(\d+))?' 
+ range_pattern2 = r'-(\d+)' + + if len(multi_select_pattern) > 1: + r = re.match(range_pattern, options[0]) + + if r is None: + r = re.match(range_pattern2, options[0]) + a = '1' + b = r.group(1).strip() + else: + a = r.group(1).strip() + b = r.group(3) + if b is not None: + b = b.strip() + + if r is not None: + if b is not None and is_numeric_string(a) and is_numeric_string(b): + # PATTERN: num1-num2 + select_range = int(a), int(b) + elif is_numeric_string(a): + # PATTERN: num + x = int(a) + select_range = (x, x) + + if select_range is not None and len(multi_select_pattern) == 2: + # PATTERN: count$$ + options[0] = multi_select_pattern[1] + elif select_range is not None and len(multi_select_pattern) == 3: + # PATTERN: count$$ sep $$ + select_sep = multi_select_pattern[1] + options[0] = multi_select_pattern[2] + + adjusted_probabilities = [] + + total_prob = 0 + + for option in options: + parts = option.split('::', 1) + if len(parts) == 2 and is_numeric_string(parts[0].strip()): + config_value = float(parts[0].strip()) + else: + config_value = 1 # Default value if no configuration is provided + + adjusted_probabilities.append(config_value) + total_prob += config_value + + normalized_probabilities = [prob / total_prob for prob in adjusted_probabilities] + + if select_range is None: + select_count = 1 + else: + select_count = random_gen.integers(low=select_range[0], high=select_range[1]+1, size=1) + + if select_count > len(options): + random_gen.shuffle(options) + selected_items = options + else: + selected_items = random_gen.choice(options, p=normalized_probabilities, size=select_count, replace=False) + + selected_items2 = [re.sub(r'^\s*[0-9.]+::', '', x, 1) for x in selected_items] + replacement = select_sep.join(selected_items2) + if '::' in replacement: + pass + + replacements_found = True + return replacement + + pattern = r'{([^{}]*?)}' + replaced_string = re.sub(pattern, replace_option, string) + + return replaced_string, replacements_found + + def replace_wildcard(string): + local_wildcard_dict = get_wildcard_dict() + pattern = r"__([\w.\-+/*\\]+)__" + matches = re.findall(pattern, string) + + replacements_found = False + + for match in matches: + keyword = match.lower() + keyword = wildcard_normalize(keyword) + if keyword in local_wildcard_dict: + replacement = random_gen.choice(local_wildcard_dict[keyword]) + replacements_found = True + string = string.replace(f"__{match}__", replacement, 1) + elif '*' in keyword: + subpattern = keyword.replace('*', '.*').replace('+','\+') + total_patterns = [] + found = False + for k, v in local_wildcard_dict.items(): + if re.match(subpattern, k) is not None: + total_patterns += v + found = True + + if found: + replacement = random_gen.choice(total_patterns) + replacements_found = True + string = string.replace(f"__{match}__", replacement, 1) + elif '/' not in keyword: + string_fallback = string.replace(f"__{match}__", f"__*/{match}__", 1) + string, replacements_found = replace_wildcard(string_fallback) + + return string, replacements_found + + replace_depth = 100 + stop_unwrap = False + while not stop_unwrap and replace_depth > 1: + replace_depth -= 1 # prevent infinite loop + + # pass1: replace options + pass1, is_replaced1 = replace_options(text) + + while is_replaced1: + pass1, is_replaced1 = replace_options(pass1) + + # pass2: replace wildcards + text, is_replaced2 = replace_wildcard(pass1) + stop_unwrap = not is_replaced1 and not is_replaced2 + + return text + + +def is_numeric_string(input_str): + return re.match(r'^-?\d+(\.\d+)?$', 
input_str) is not None + + +def safe_float(x): + if is_numeric_string(x): + return float(x) + else: + return 1.0 + + +def extract_lora_values(string): + pattern = r']+)>' + matches = re.findall(pattern, string) + + def touch_lbw(text): + return re.sub(r'LBW=[A-Za-z][A-Za-z0-9_-]*:', r'LBW=', text) + + items = [touch_lbw(match.strip(':')) for match in matches] + + added = set() + result = [] + for item in items: + item = item.split(':') + + lora = None + a = None + b = None + lbw = None + lbw_a = None + lbw_b = None + + if len(item) > 0: + lora = item[0] + + for sub_item in item[1:]: + if is_numeric_string(sub_item): + if a is None: + a = float(sub_item) + elif b is None: + b = float(sub_item) + elif sub_item.startswith("LBW="): + for lbw_item in sub_item[4:].split(';'): + if lbw_item.startswith("A="): + lbw_a = safe_float(lbw_item[2:].strip()) + elif lbw_item.startswith("B="): + lbw_b = safe_float(lbw_item[2:].strip()) + elif lbw_item.strip() != '': + lbw = lbw_item + + if a is None: + a = 1.0 + if b is None: + b = a + + if lora is not None and lora not in added: + result.append((lora, a, b, lbw, lbw_a, lbw_b)) + added.add(lora) + + return result + + +def remove_lora_tags(string): + pattern = r']+>' + result = re.sub(pattern, '', string) + + return result + + +def resolve_lora_name(lora_name_cache, name): + if os.path.exists(name): + return name + else: + if len(lora_name_cache) == 0: + lora_name_cache.extend(folder_paths.get_filename_list("loras")) + + for x in lora_name_cache: + if x.endswith(name): + return x + + +def process_with_loras(wildcard_opt, model, clip, clip_encoder=None): + lora_name_cache = [] + + pass1 = process(wildcard_opt) + loras = extract_lora_values(pass1) + pass2 = remove_lora_tags(pass1) + + for lora_name, model_weight, clip_weight, lbw, lbw_a, lbw_b in loras: + lora_name_ext = lora_name.split('.') + if ('.'+lora_name_ext[-1]) not in folder_paths.supported_pt_extensions: + lora_name = lora_name+".safetensors" + + orig_lora_name = lora_name + lora_name = resolve_lora_name(lora_name_cache, lora_name) + + if lora_name is not None: + path = folder_paths.get_full_path("loras", lora_name) + else: + path = None + + if path is not None: + print(f"LOAD LORA: {lora_name}: {model_weight}, {clip_weight}, LBW={lbw}, A={lbw_a}, B={lbw_b}") + + def default_lora(): + return nodes.LoraLoader().load_lora(model, clip, lora_name, model_weight, clip_weight) + + if lbw is not None: + if 'LoraLoaderBlockWeight //Inspire' not in nodes.NODE_CLASS_MAPPINGS: + utils.try_install_custom_node( + 'https://github.com/ltdrdata/ComfyUI-Inspire-Pack', + "To use 'LBW=' syntax in wildcards, 'Inspire Pack' extension is required.") + + print(f"'LBW(Lora Block Weight)' is given, but the 'Inspire Pack' is not installed. 
The LBW= attribute is being ignored.") + model, clip = default_lora() + else: + cls = nodes.NODE_CLASS_MAPPINGS['LoraLoaderBlockWeight //Inspire'] + model, clip, _ = cls().doit(model, clip, lora_name, model_weight, clip_weight, False, 0, lbw_a, lbw_b, "", lbw) + else: + model, clip = default_lora() + else: + print(f"LORA NOT FOUND: {orig_lora_name}") + + pass3 = [x.strip() for x in pass2.split("BREAK")] + pass3 = [x for x in pass3 if x != ''] + + if len(pass3) == 0: + pass3 = [''] + + pass3_str = [f'[{x}]' for x in pass3] + print(f"CLIP: {str.join(' + ', pass3_str)}") + + result = None + + for prompt in pass3: + if clip_encoder is None: + cur = nodes.CLIPTextEncode().encode(clip, prompt)[0] + else: + cur = clip_encoder.encode(clip, prompt)[0] + + if result is not None: + result = nodes.ConditioningConcat().concat(result, cur)[0] + else: + result = cur + + return model, clip, result + + +def starts_with_regex(pattern, text): + regex = re.compile(pattern) + return bool(regex.match(text)) + + +def split_to_dict(text): + pattern = r'\[([A-Za-z0-9_. ]+)\]([^\[]+)(?=\[|$)' + matches = re.findall(pattern, text) + + result_dict = {key: value.strip() for key, value in matches} + + return result_dict + + +class WildcardChooser: + def __init__(self, items, randomize_when_exhaust): + self.i = 0 + self.items = items + self.randomize_when_exhaust = randomize_when_exhaust + + def get(self, seg): + if self.i >= len(self.items): + self.i = 0 + if self.randomize_when_exhaust: + random.shuffle(self.items) + + item = self.items[self.i] + self.i += 1 + + return item + + +class WildcardChooserDict: + def __init__(self, items): + self.items = items + + def get(self, seg): + text = "" + if 'ALL' in self.items: + text = self.items['ALL'] + + if seg.label in self.items: + text += self.items[seg.label] + + return text + + +def split_string_with_sep(input_string): + sep_pattern = r'\[SEP(?:\:\w+)?\]' + + substrings = re.split(sep_pattern, input_string) + + result_list = [None] + matches = re.findall(sep_pattern, input_string) + for i, substring in enumerate(substrings): + result_list.append(substring) + if i < len(matches): + if matches[i] == '[SEP]': + result_list.append(None) + elif matches[i] == '[SEP:R]': + result_list.append(random.randint(0, 1125899906842624)) + else: + try: + seed = int(matches[i][5:-1]) + except: + seed = None + result_list.append(seed) + + iterable = iter(result_list) + return list(zip(iterable, iterable)) + + +def process_wildcard_for_segs(wildcard): + if wildcard.startswith('[LAB]'): + raw_items = split_to_dict(wildcard) + + items = {} + for k, v in raw_items.items(): + v = v.strip() + if v != '': + items[k] = v + + return 'LAB', WildcardChooserDict(items) + + elif starts_with_regex(r"\[(ASC|DSC|RND)\]", wildcard): + mode = wildcard[1:4] + items = split_string_with_sep(wildcard[5:]) + + if mode == 'RND': + random.shuffle(items) + return mode, WildcardChooser(items, True) + else: + return mode, WildcardChooser(items, False) + + else: + return None, WildcardChooser([(None, wildcard)], False) diff --git a/custom_nodes/ComfyUI-Impact-Pack/modules/thirdparty/noise_nodes.py b/custom_nodes/ComfyUI-Impact-Pack/modules/thirdparty/noise_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..39354d326f07322699379309ffdcc0fcaa7250e0 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/modules/thirdparty/noise_nodes.py @@ -0,0 +1,80 @@ +# Due to the current lack of maintenance for the `ComfyUI_Noise` extension, +# I have copied the code from the applied PR. 
+# https://github.com/BlenderNeko/ComfyUI_Noise/pull/13/files + +import comfy +import torch + +class Unsampler: + @classmethod + def INPUT_TYPES(s): + return {"required": + {"model": ("MODEL",), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "end_at_step": ("INT", {"default": 0, "min": 0, "max": 10000}), + "cfg": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS,), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS,), + "normalize": (["disable", "enable"],), + "positive": ("CONDITIONING",), + "negative": ("CONDITIONING",), + "latent_image": ("LATENT",), + }} + + RETURN_TYPES = ("LATENT",) + FUNCTION = "unsampler" + + CATEGORY = "sampling" + + def unsampler(self, model, cfg, sampler_name, steps, end_at_step, scheduler, normalize, positive, negative, + latent_image): + normalize = normalize == "enable" + device = comfy.model_management.get_torch_device() + latent = latent_image + latent_image = latent["samples"] + + end_at_step = min(end_at_step, steps - 1) + end_at_step = steps - end_at_step + + noise = torch.zeros(latent_image.size(), dtype=latent_image.dtype, layout=latent_image.layout, device="cpu") + noise_mask = None + if "noise_mask" in latent: + noise_mask = comfy.sample.prepare_mask(latent["noise_mask"], noise.shape, device) + + real_model = None + real_model = model.model + + noise = noise.to(device) + latent_image = latent_image.to(device) + + positive = comfy.sample.convert_cond(positive) + negative = comfy.sample.convert_cond(negative) + + models, inference_memory = comfy.sample.get_additional_models(positive, negative, model.model_dtype()) + + comfy.model_management.load_models_gpu([model] + models, model.memory_required(noise.shape) + inference_memory) + + sampler = comfy.samplers.KSampler(real_model, steps=steps, device=device, sampler=sampler_name, + scheduler=scheduler, denoise=1.0, model_options=model.model_options) + + sigmas = sigmas = sampler.sigmas.flip(0) + 0.0001 + + pbar = comfy.utils.ProgressBar(steps) + + def callback(step, x0, x, total_steps): + pbar.update_absolute(step + 1, total_steps) + + samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, + force_full_denoise=False, denoise_mask=noise_mask, sigmas=sigmas, start_step=0, + last_step=end_at_step, callback=callback) + if normalize: + # technically doesn't normalize because unsampling is not guaranteed to end at a std given by the schedule + samples -= samples.mean() + samples /= samples.std() + samples = samples.cpu() + + comfy.sample.cleanup_additional_models(models) + + out = latent.copy() + out["samples"] = samples + return (out,) diff --git a/custom_nodes/ComfyUI-Impact-Pack/node_list.json b/custom_nodes/ComfyUI-Impact-Pack/node_list.json new file mode 100644 index 0000000000000000000000000000000000000000..78a0b903a8006b6b536fc31f91e8990442497965 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/node_list.json @@ -0,0 +1,4 @@ +{ + "Segs Mask": "This node is renamed to 'ImpactSegsAndMask'", + "Segs Mask ForEach": "This node is renamed to 'ImpactSegsAndMaskForEach'" +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/notebook/comfyui_colab_impact_pack.ipynb b/custom_nodes/ComfyUI-Impact-Pack/notebook/comfyui_colab_impact_pack.ipynb new file mode 100644 index 0000000000000000000000000000000000000000..6435059cb8fbe9e5e27451fa959965309b7626bf --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/notebook/comfyui_colab_impact_pack.ipynb @@ -0,0 +1,172 @@ +{ + "cells": [ + { + 
"attachments": {}, + "cell_type": "markdown", + "metadata": { + "id": "aaaaaaaaaa" + }, + "source": [ + "Git clone the repo and install the requirements. (ignore the pip errors about protobuf)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "bbbbbbbbbb" + }, + "outputs": [], + "source": [ + "#@title Environment Setup\n", + "\n", + "from pathlib import Path\n", + "\n", + "OPTIONS = {}\n", + "\n", + "WORKSPACE = 'ComfyUI'\n", + "USE_GOOGLE_DRIVE = True #@param {type:\"boolean\"}\n", + "UPDATE_COMFY_UI = True #@param {type:\"boolean\"}\n", + "\n", + "OPTIONS['USE_GOOGLE_DRIVE'] = USE_GOOGLE_DRIVE\n", + "OPTIONS['UPDATE_COMFY_UI'] = UPDATE_COMFY_UI\n", + "\n", + "if OPTIONS['USE_GOOGLE_DRIVE']:\n", + " !echo \"Mounting Google Drive...\"\n", + " %cd /\n", + " \n", + " from google.colab import drive\n", + " drive.mount('/content/drive')\n", + "\n", + " WORKSPACE = \"/content/drive/MyDrive/ComfyUI\"\n", + " \n", + " %cd /content/drive/MyDrive\n", + "\n", + "![ ! -d $WORKSPACE ] && echo \"-= Initial setup ComfyUI (Original)=-\" && git clone https://github.com/comfyanonymous/ComfyUI\n", + "%cd $WORKSPACE\n", + "\n", + "if OPTIONS['UPDATE_COMFY_UI']:\n", + " !echo \"-= Updating ComfyUI =-\"\n", + " !git pull\n", + " !rm \"/content/drive/MyDrive/ComfyUI/custom_nodes/comfyui-impact-pack.py\"\n", + "\n", + "%cd custom_nodes\n", + "!git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack\n", + "%cd $WORKSPACE\n", + "\n", + "!echo -= Install dependencies =-\n", + "!pip -q install xformers -r requirements.txt\n" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": { + "id": "kkkkkkkkkkkkkk" + }, + "source": [ + "### Run ComfyUI with localtunnel (Recommended Way)\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "jjjjjjjjjjjjj", + "outputId": "83be9411-d939-4813-e6c1-80e75bf8e80d" + }, + "outputs": [], + "source": [ + "!npm install -g localtunnel\n", + "\n", + "import subprocess\n", + "import threading\n", + "import time\n", + "import socket\n", + "def iframe_thread(port):\n", + " while True:\n", + " time.sleep(0.5)\n", + " sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n", + " result = sock.connect_ex(('127.0.0.1', port))\n", + " if result == 0:\n", + " break\n", + " sock.close()\n", + " print(\"\\nComfyUI finished loading, trying to launch localtunnel (if it gets stuck here localtunnel is having issues)\")\n", + " p = subprocess.Popen([\"lt\", \"--port\", \"{}\".format(port)], stdout=subprocess.PIPE)\n", + " for line in p.stdout:\n", + " print(line.decode(), end='')\n", + "\n", + "\n", + "threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n", + "\n", + "!python main.py --dont-print-server" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": { + "id": "gggggggggg" + }, + "source": [ + "### Run ComfyUI with colab iframe (use only in case the previous way with localtunnel doesn't work)\n", + "\n", + "You should see the ui appear in an iframe. If you get a 403 error, it's your firefox settings or an extension that's messing things up.\n", + "\n", + "If you want to open it in another window use the link.\n", + "\n", + "Note that some UI features like live image previews won't work because the colab iframe blocks websockets." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "hhhhhhhhhh" + }, + "outputs": [], + "source": [ + "import threading\n", + "import time\n", + "import socket\n", + "def iframe_thread(port):\n", + " while True:\n", + " time.sleep(0.5)\n", + " sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n", + " result = sock.connect_ex(('127.0.0.1', port))\n", + " if result == 0:\n", + " break\n", + " sock.close()\n", + " from google.colab import output\n", + " output.serve_kernel_port_as_iframe(port, height=1024)\n", + " print(\"to open it in a window you can open this link here:\")\n", + " output.serve_kernel_port_as_window(port)\n", + "\n", + "threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n", + "\n", + "!python main.py --dont-print-server" + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "provenance": [] + }, + "gpuClass": "standard", + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/custom_nodes/ComfyUI-Impact-Pack/requirements.txt b/custom_nodes/ComfyUI-Impact-Pack/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..56e024b08a37985850540a2bfc253cf431f1755f --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/requirements.txt @@ -0,0 +1,6 @@ +segment-anything +scikit-image +piexif +transformers +opencv-python-headless +GitPython diff --git a/custom_nodes/ComfyUI-Impact-Pack/test/advanced-sampler.json b/custom_nodes/ComfyUI-Impact-Pack/test/advanced-sampler.json new file mode 100644 index 0000000000000000000000000000000000000000..f4bb5149d0277653058d055f55eb5eebd9080db1 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/test/advanced-sampler.json @@ -0,0 +1,976 @@ +{ + "last_node_id": 27, + "last_link_id": 46, + "nodes": [ + { + "id": 11, + "type": "EditBasicPipe", + "pos": [ + 1260, + 590 + ], + "size": { + "0": 267, + "1": 126 + }, + "flags": {}, + "order": 6, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 15 + }, + { + "name": "model", + "type": "MODEL", + "link": null + }, + { + "name": "clip", + "type": "CLIP", + "link": null + }, + { + "name": "vae", + "type": "VAE", + "link": null + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 17 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": null + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 20 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "EditBasicPipe" + } + }, + { + "id": 12, + "type": "CLIPTextEncode", + "pos": [ + 420, + 670 + ], + "size": { + "0": 422.84503173828125, + "1": 164.31304931640625 + }, + "flags": {}, + "order": 4, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 16 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 17 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "photorealistic:1.4, best quality:1.4, masterpiece, 1girl is sitting in the cafe terrace, (colorful hair:1.1)" + ] + }, + { + "id": 6, + "type": "CLIPTextEncode", + "pos": [ + 415, + 186 + ], + "size": { + "0": 422.84503173828125, + "1": 164.31304931640625 + }, + "flags": {}, + "order": 2, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 3 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + 
"type": "CONDITIONING", + "links": [ + 13 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "photorealistic:1.4, best quality:1.4, masterpiece, 1girl is sitting in the cafe terrace" + ] + }, + { + "id": 7, + "type": "CLIPTextEncode", + "pos": [ + 413, + 389 + ], + "size": { + "0": 425.27801513671875, + "1": 180.6060791015625 + }, + "flags": {}, + "order": 3, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 5 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 14 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "text, watermark, low quality:1.4, worst quality:1.4" + ] + }, + { + "id": 10, + "type": "ToBasicPipe", + "pos": [ + 952, + 189 + ], + "size": { + "0": 241.79998779296875, + "1": 106 + }, + "flags": {}, + "order": 5, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 10 + }, + { + "name": "clip", + "type": "CLIP", + "link": 11 + }, + { + "name": "vae", + "type": "VAE", + "link": 12 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 13 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 14 + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 15, + 19, + 33 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ToBasicPipe" + } + }, + { + "id": 22, + "type": "FromBasicPipe", + "pos": [ + 880, + 1040 + ], + "size": { + "0": 241.79998779296875, + "1": 106 + }, + "flags": {}, + "order": 8, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 33 + } + ], + "outputs": [ + { + "name": "model", + "type": "MODEL", + "links": [ + 34 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "clip", + "type": "CLIP", + "links": null, + "shape": 3 + }, + { + "name": "vae", + "type": "VAE", + "links": [ + 40 + ], + "shape": 3, + "slot_index": 2 + }, + { + "name": "positive", + "type": "CONDITIONING", + "links": [ + 35 + ], + "shape": 3, + "slot_index": 3 + }, + { + "name": "negative", + "type": "CONDITIONING", + "links": [ + 36 + ], + "shape": 3, + "slot_index": 4 + } + ], + "properties": { + "Node name for S&R": "FromBasicPipe" + } + }, + { + "id": 24, + "type": "VAEDecode", + "pos": [ + 1938, + 935 + ], + "size": { + "0": 210, + "1": 46 + }, + "flags": {}, + "order": 14, + "mode": 0, + "inputs": [ + { + "name": "samples", + "type": "LATENT", + "link": 46 + }, + { + "name": "vae", + "type": "VAE", + "link": 40 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 41 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "VAEDecode" + } + }, + { + "id": 4, + "type": "CheckpointLoaderSimple", + "pos": [ + -5, + 212 + ], + "size": { + "0": 315, + "1": 98 + }, + "flags": {}, + "order": 0, + "mode": 0, + "outputs": [ + { + "name": "MODEL", + "type": "MODEL", + "links": [ + 10 + ], + "slot_index": 0 + }, + { + "name": "CLIP", + "type": "CLIP", + "links": [ + 3, + 5, + 11, + 16 + ], + "slot_index": 1 + }, + { + "name": "VAE", + "type": "VAE", + "links": [ + 12, + 31 + ], + "slot_index": 2 + } + ], + "properties": { + "Node name for S&R": "CheckpointLoaderSimple" + }, + "widgets_values": [ + "V07_v07.safetensors" + ] + }, + { + "id": 25, + "type": "PreviewImage", + "pos": [ + 2175, + 1079 + ], + "size": { + "0": 516, + "1": 424 + }, + "flags": {}, + "order": 15, + "mode": 0, + 
"inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 41 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 13, + "type": "KSamplerAdvancedProvider", + "pos": [ + 1727, + 192 + ], + "size": { + "0": 355.20001220703125, + "1": 154 + }, + "flags": {}, + "order": 7, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 19 + } + ], + "outputs": [ + { + "name": "KSAMPLER_ADVANCED", + "type": "KSAMPLER_ADVANCED", + "links": [ + 42 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSamplerAdvancedProvider" + }, + "widgets_values": [ + 8, + "fixed", + "normal" + ] + }, + { + "id": 16, + "type": "EmptyLatentImage", + "pos": [ + 532, + 1143 + ], + "size": { + "0": 315, + "1": 106 + }, + "flags": {}, + "order": 1, + "mode": 0, + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 28, + 45 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "EmptyLatentImage" + }, + "widgets_values": [ + 792, + 512, + 1 + ] + }, + { + "id": 19, + "type": "KSampler", + "pos": [ + 1194.657802060547, + 1075.971700888672 + ], + "size": [ + 315, + 473.9999771118164 + ], + "flags": {}, + "order": 10, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 34 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 35 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 36 + }, + { + "name": "latent_image", + "type": "LATENT", + "link": 28 + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 30 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSampler" + }, + "widgets_values": [ + 1107040072933062, + "fixed", + 20, + 8, + "euler", + "normal", + 1 + ] + }, + { + "id": 27, + "type": "TwoAdvancedSamplersForMask", + "pos": [ + 2187, + 266 + ], + "size": [ + 315, + 426.00000762939453 + ], + "flags": {}, + "order": 13, + "mode": 0, + "inputs": [ + { + "name": "samples", + "type": "LATENT", + "link": 45 + }, + { + "name": "base_sampler", + "type": "KSAMPLER_ADVANCED", + "link": 42 + }, + { + "name": "mask_sampler", + "type": "KSAMPLER_ADVANCED", + "link": 43 + }, + { + "name": "mask", + "type": "MASK", + "link": 44 + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 46 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "TwoAdvancedSamplersForMask" + }, + "widgets_values": [ + 1107040072933062, + "fixed", + 20, + 1, + 10 + ] + }, + { + "id": 23, + "type": "PreviewBridge", + "pos": [ + 1778, + 1098 + ], + "size": { + "0": 315, + "1": 290 + }, + "flags": {}, + "order": 12, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 37 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": null, + "shape": 3 + }, + { + "name": "MASK", + "type": "MASK", + "links": [ + 44 + ], + "shape": 3, + "slot_index": 1 + } + ], + "properties": { + "Node name for S&R": "PreviewBridge" + }, + "widgets_values": [ + { + "filename": "clipspace-mask-348148.69999999925.png", + "subfolder": "clipspace", + "type": "input", + "image_hash": 492469318636598500, + "forward_filename": "ComfyUI_00001_.png", + "forward_subfolder": "", + "forward_type": "temp" + } + ] + }, + { + "id": 15, + "type": "KSamplerAdvancedProvider", + "pos": [ + 1719, + 592 + ], + "size": { + "0": 355.20001220703125, + "1": 154 + }, + "flags": {}, + "order": 9, + "mode": 0, + "inputs": [ + { + 
"name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 20 + } + ], + "outputs": [ + { + "name": "KSAMPLER_ADVANCED", + "type": "KSAMPLER_ADVANCED", + "links": [ + 43 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSamplerAdvancedProvider" + }, + "widgets_values": [ + 8, + "fixed", + "normal" + ] + }, + { + "id": 20, + "type": "VAEDecode", + "pos": [ + 1546, + 972 + ], + "size": { + "0": 210, + "1": 46 + }, + "flags": {}, + "order": 11, + "mode": 0, + "inputs": [ + { + "name": "samples", + "type": "LATENT", + "link": 30 + }, + { + "name": "vae", + "type": "VAE", + "link": 31 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 37 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "VAEDecode" + } + } + ], + "links": [ + [ + 3, + 4, + 1, + 6, + 0, + "CLIP" + ], + [ + 5, + 4, + 1, + 7, + 0, + "CLIP" + ], + [ + 10, + 4, + 0, + 10, + 0, + "MODEL" + ], + [ + 11, + 4, + 1, + 10, + 1, + "CLIP" + ], + [ + 12, + 4, + 2, + 10, + 2, + "VAE" + ], + [ + 13, + 6, + 0, + 10, + 3, + "CONDITIONING" + ], + [ + 14, + 7, + 0, + 10, + 4, + "CONDITIONING" + ], + [ + 15, + 10, + 0, + 11, + 0, + "BASIC_PIPE" + ], + [ + 16, + 4, + 1, + 12, + 0, + "CLIP" + ], + [ + 17, + 12, + 0, + 11, + 4, + "CONDITIONING" + ], + [ + 19, + 10, + 0, + 13, + 0, + "BASIC_PIPE" + ], + [ + 20, + 11, + 0, + 15, + 0, + "BASIC_PIPE" + ], + [ + 28, + 16, + 0, + 19, + 3, + "LATENT" + ], + [ + 30, + 19, + 0, + 20, + 0, + "LATENT" + ], + [ + 31, + 4, + 2, + 20, + 1, + "VAE" + ], + [ + 33, + 10, + 0, + 22, + 0, + "BASIC_PIPE" + ], + [ + 34, + 22, + 0, + 19, + 0, + "MODEL" + ], + [ + 35, + 22, + 3, + 19, + 1, + "CONDITIONING" + ], + [ + 36, + 22, + 4, + 19, + 2, + "CONDITIONING" + ], + [ + 37, + 20, + 0, + 23, + 0, + "IMAGE" + ], + [ + 40, + 22, + 2, + 24, + 1, + "VAE" + ], + [ + 41, + 24, + 0, + 25, + 0, + "IMAGE" + ], + [ + 42, + 13, + 0, + 27, + 1, + "KSAMPLER_ADVANCED" + ], + [ + 43, + 15, + 0, + 27, + 2, + "KSAMPLER_ADVANCED" + ], + [ + 44, + 23, + 1, + 27, + 3, + "MASK" + ], + [ + 45, + 16, + 0, + 27, + 0, + "LATENT" + ], + [ + 46, + 27, + 0, + 24, + 0, + "LATENT" + ] + ], + "groups": [], + "config": {}, + "extra": {}, + "version": 0.4 +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/test/detailer-pipe-test-sdxl.json b/custom_nodes/ComfyUI-Impact-Pack/test/detailer-pipe-test-sdxl.json new file mode 100644 index 0000000000000000000000000000000000000000..cf510fdf3a242b233f923185c8fb22109c68268f --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/test/detailer-pipe-test-sdxl.json @@ -0,0 +1,1989 @@ +{ + "last_node_id": 52, + "last_link_id": 150, + "nodes": [ + { + "id": 12, + "type": "CLIPTextEncodeSDXLRefiner", + "pos": [ + 480, + 990 + ], + "size": { + "0": 400, + "1": 200 + }, + "flags": {}, + "order": 9, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 11 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 13 + ], + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncodeSDXLRefiner" + }, + "widgets_values": [ + 6, + 1024, + 1024, + "ugly, male, western" + ] + }, + { + "id": 14, + "type": "UltralyticsDetectorProvider", + "pos": [ + 963, + 955 + ], + "size": { + "0": 315, + "1": 78 + }, + "flags": {}, + "order": 0, + "mode": 0, + "outputs": [ + { + "name": "BBOX_DETECTOR", + "type": "BBOX_DETECTOR", + "links": [ + 16 + ], + "shape": 3 + }, + { + "name": "SEGM_DETECTOR", + "type": "SEGM_DETECTOR", + "links": null, + 
"shape": 3 + } + ], + "properties": { + "Node name for S&R": "UltralyticsDetectorProvider" + }, + "widgets_values": [ + "bbox/face_yolov8m.pt" + ] + }, + { + "id": 18, + "type": "PreviewImage", + "pos": [ + 3270, + 810 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 21, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 20 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 15, + "type": "SAMLoader", + "pos": [ + 967, + 1086 + ], + "size": { + "0": 315, + "1": 82 + }, + "flags": {}, + "order": 1, + "mode": 0, + "outputs": [ + { + "name": "SAM_MODEL", + "type": "SAM_MODEL", + "links": [ + 17 + ], + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "SAMLoader" + }, + "widgets_values": [ + "sam_vit_b_01ec64.pth", + "CPU" + ] + }, + { + "id": 9, + "type": "CLIPTextEncodeSDXL", + "pos": [ + 640, + -550 + ], + "size": { + "0": 400, + "1": 270 + }, + "flags": {}, + "order": 5, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 6 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 9 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncodeSDXL" + }, + "widgets_values": [ + 1024, + 1024, + 0, + 0, + 1024, + 1024, + "a closeup photograph of cute girl", + "closeup" + ] + }, + { + "id": 7, + "type": "CheckpointLoaderSimple", + "pos": [ + 60, + -580 + ], + "size": { + "0": 315, + "1": 98 + }, + "flags": {}, + "order": 2, + "mode": 0, + "outputs": [ + { + "name": "MODEL", + "type": "MODEL", + "links": [ + 1 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "CLIP", + "type": "CLIP", + "links": [ + 2, + 6, + 7 + ], + "shape": 3, + "slot_index": 1 + }, + { + "name": "VAE", + "type": "VAE", + "links": [ + 3 + ], + "shape": 3, + "slot_index": 2 + } + ], + "properties": { + "Node name for S&R": "CheckpointLoaderSimple" + }, + "widgets_values": [ + "SDXL/rundiffusionXL_beta.safetensors" + ] + }, + { + "id": 13, + "type": "LoadImage", + "pos": [ + 257, + 164 + ], + "size": { + "0": 315, + "1": 314 + }, + "flags": {}, + "order": 3, + "mode": 0, + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 15, + 64, + 112 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "MASK", + "type": "MASK", + "links": [], + "shape": 3, + "slot_index": 1 + } + ], + "properties": { + "Node name for S&R": "LoadImage" + }, + "widgets_values": [ + "chunli.png", + "image" + ] + }, + { + "id": 10, + "type": "CLIPTextEncodeSDXL", + "pos": [ + 640, + -230 + ], + "size": { + "0": 400, + "1": 270 + }, + "flags": {}, + "order": 6, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 7 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 8 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncodeSDXL" + }, + "widgets_values": [ + 1024, + 1024, + 0, + 0, + 1024, + 1024, + "ugly, male", + "ugly, male" + ] + }, + { + "id": 17, + "type": "PreviewImage", + "pos": [ + 3270, + 450 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 20, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 19 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 8, + "type": "CheckpointLoaderSimple", + "pos": [ + 120, + 590 + ], + "size": { + "0": 315, + "1": 98 + }, + "flags": {}, + "order": 4, + "mode": 0, + "outputs": [ + { + "name": "MODEL", + 
"type": "MODEL", + "links": [ + 69 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "CLIP", + "type": "CLIP", + "links": [ + 5, + 10, + 11 + ], + "shape": 3, + "slot_index": 1 + }, + { + "name": "VAE", + "type": "VAE", + "links": null, + "shape": 3, + "slot_index": 2 + } + ], + "properties": { + "Node name for S&R": "CheckpointLoaderSimple" + }, + "widgets_values": [ + "SDXL/sd_xl_refiner_1.0_0.9vae.safetensors" + ] + }, + { + "id": 11, + "type": "CLIPTextEncodeSDXLRefiner", + "pos": [ + 483, + 738 + ], + "size": { + "0": 400, + "1": 200 + }, + "flags": {}, + "order": 8, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 10 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 70 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncodeSDXLRefiner" + }, + "widgets_values": [ + 6, + 1024, + 1024, + "high quality" + ] + }, + { + "id": 37, + "type": "PreviewImage", + "pos": [ + 2810, + -280 + ], + "size": { + "0": 344.04876708984375, + "1": 580.6563720703125 + }, + "flags": {}, + "order": 7, + "mode": 2, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 64 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 16, + "type": "PreviewImage", + "pos": [ + 3200, + -280 + ], + "size": { + "0": 336.36944580078125, + "1": 585.6206665039062 + }, + "flags": {}, + "order": 18, + "mode": 2, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 18 + } + ], + "title": "SDXL Base only", + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 6, + "type": "ToDetailerPipeSDXL", + "pos": [ + 1199, + 379 + ], + "size": { + "0": 400, + "1": 340 + }, + "flags": {}, + "order": 10, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 1 + }, + { + "name": "clip", + "type": "CLIP", + "link": 2 + }, + { + "name": "vae", + "type": "VAE", + "link": 3 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 9 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 8 + }, + { + "name": "refiner_model", + "type": "MODEL", + "link": 69 + }, + { + "name": "refiner_clip", + "type": "CLIP", + "link": 5, + "slot_index": 6 + }, + { + "name": "refiner_positive", + "type": "CONDITIONING", + "link": 70 + }, + { + "name": "refiner_negative", + "type": "CONDITIONING", + "link": 13, + "slot_index": 8 + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "link": 16, + "slot_index": 9 + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "link": 17, + "slot_index": 10 + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "link": null + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null + } + ], + "outputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [ + 114 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ToDetailerPipeSDXL" + }, + "widgets_values": [ + "", + "Select the LoRA to add to the text" + ] + }, + { + "id": 38, + "type": "PreviewImage", + "pos": [ + 3590, + -280 + ], + "size": { + "0": 336.36944580078125, + "1": 585.6206665039062 + }, + "flags": {}, + "order": 19, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 67 + } + ], + "title": "SDXL Base + Refiner", + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 41, + "type": "BasicPipeToDetailerPipeSDXL", + "pos": [ + 2160, + 1010 + ], + "size": { + "0": 
405.5999755859375, + "1": 200 + }, + "flags": {}, + "order": 15, + "mode": 0, + "inputs": [ + { + "name": "base_basic_pipe", + "type": "BASIC_PIPE", + "link": 87 + }, + { + "name": "refiner_basic_pipe", + "type": "BASIC_PIPE", + "link": 88 + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "link": 133 + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "link": 134, + "slot_index": 3 + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "link": 135, + "slot_index": 4 + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": 136, + "slot_index": 5 + } + ], + "outputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [ + 86, + 110 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "BasicPipeToDetailerPipeSDXL" + }, + "widgets_values": [ + "", + "Select the LoRA to add to the text" + ] + }, + { + "id": 44, + "type": "FaceDetailerPipe", + "pos": [ + 3565, + 427 + ], + "size": { + "0": 456, + "1": 902 + }, + "flags": {}, + "order": 22, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 104, + "slot_index": 0 + }, + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 103 + } + ], + "outputs": [ + { + "name": "image", + "type": "IMAGE", + "links": [], + "shape": 3, + "slot_index": 0 + }, + { + "name": "cropped_refined", + "type": "IMAGE", + "links": [], + "shape": 6, + "slot_index": 1 + }, + { + "name": "cropped_enhanced_alpha", + "type": "IMAGE", + "links": [ + 105 + ], + "shape": 6, + "slot_index": 2 + }, + { + "name": "mask", + "type": "MASK", + "links": null, + "shape": 3 + }, + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": null, + "shape": 3 + }, + { + "name": "cnet_images", + "type": "IMAGE", + "links": null, + "shape": 6 + } + ], + "properties": { + "Node name for S&R": "FaceDetailerPipe" + }, + "widgets_values": [ + 1024, + false, + 768, + 104033248204033, + "fixed", + 30, + 8, + "euler", + "normal", + 0.5, + 5, + true, + true, + 0.6, + 30, + 3, + "center-1", + 30, + 0.93, + 0, + 0.7, + "False", + 10, + 0.1 + ] + }, + { + "id": 45, + "type": "PreviewImage", + "pos": [ + 4109.76494140625, + 483.81650390625 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 24, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 105 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 1, + "type": "FaceDetailerPipe", + "pos": [ + 2720, + 430 + ], + "size": { + "0": 456, + "1": 902 + }, + "flags": {}, + "order": 16, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 15, + "slot_index": 0 + }, + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 86 + } + ], + "outputs": [ + { + "name": "image", + "type": "IMAGE", + "links": [ + 18, + 67, + 104, + 106 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "cropped_refined", + "type": "IMAGE", + "links": [ + 19 + ], + "shape": 6, + "slot_index": 1 + }, + { + "name": "cropped_enhanced_alpha", + "type": "IMAGE", + "links": [ + 20 + ], + "shape": 6, + "slot_index": 2 + }, + { + "name": "mask", + "type": "MASK", + "links": null, + "shape": 3 + }, + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [ + 103 + ], + "shape": 3, + "slot_index": 4 + }, + { + "name": "cnet_images", + "type": "IMAGE", + "links": null, + "shape": 6 + } + ], + "properties": { + "Node name for S&R": "FaceDetailerPipe" + }, + "widgets_values": [ + 1024, + false, + 768, + 
104033248204033, + "fixed", + 30, + 8, + "euler", + "normal", + 0.5, + 5, + true, + true, + 0.6, + 30, + 3, + "center-1", + 30, + 0.93, + 0, + 0.7, + "False", + 10, + 0.1 + ] + }, + { + "id": 43, + "type": "ToBasicPipe", + "pos": [ + 1790, + 1130 + ], + "size": { + "0": 241.79998779296875, + "1": 106 + }, + "flags": {}, + "order": 14, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 142 + }, + { + "name": "clip", + "type": "CLIP", + "link": 143 + }, + { + "name": "vae", + "type": "VAE", + "link": 145, + "slot_index": 2 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 149 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 150 + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 88, + 108 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ToBasicPipe" + } + }, + { + "id": 49, + "type": "ImpactSimpleDetectorSEGSPipe", + "pos": [ + 2236.375298828125, + 1520.8711416015626 + ], + "size": { + "0": 315, + "1": 246 + }, + "flags": {}, + "order": 17, + "mode": 0, + "inputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 110, + "slot_index": 0 + }, + { + "name": "image", + "type": "IMAGE", + "link": 112, + "slot_index": 1 + } + ], + "outputs": [ + { + "name": "SEGS", + "type": "SEGS", + "links": [ + 111 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImpactSimpleDetectorSEGSPipe" + }, + "widgets_values": [ + 0.5, + 0, + 3, + 10, + 0.5, + 0, + 0, + 0.7 + ] + }, + { + "id": 47, + "type": "DetailerForEachPipe", + "pos": [ + 2725, + 1448 + ], + "size": { + "0": 456.5638732910156, + "1": 559.1150512695312 + }, + "flags": {}, + "order": 23, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 106 + }, + { + "name": "segs", + "type": "SEGS", + "link": 111, + "slot_index": 1 + }, + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 107, + "slot_index": 2 + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null + }, + { + "name": "refiner_basic_pipe_opt", + "type": "BASIC_PIPE", + "link": 108 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 113 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "segs", + "type": "SEGS", + "links": null, + "shape": 3 + }, + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": null, + "shape": 3 + }, + { + "name": "cnet_images", + "type": "IMAGE", + "links": null, + "shape": 6 + } + ], + "properties": { + "Node name for S&R": "DetailerForEachPipe" + }, + "widgets_values": [ + 256, + true, + 768, + 450265819682234, + "fixed", + 20, + 8, + "euler", + "normal", + 0.5, + 5, + true, + true, + "", + 0.2 + ] + }, + { + "id": 50, + "type": "PreviewImage", + "pos": [ + 3448.7228955078117, + 1463.962194335937 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 25, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 113 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 40, + "type": "ToDetailerPipeSDXL", + "pos": [ + 2226, + 539 + ], + "size": { + "0": 400, + "1": 340 + }, + "flags": {}, + "order": 13, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 125 + }, + { + "name": "clip", + "type": "CLIP", + "link": 116, + "slot_index": 1 + }, + { + "name": "vae", + "type": "VAE", + "link": 117 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 120, + "slot_index": 3 + }, 
+ { + "name": "negative", + "type": "CONDITIONING", + "link": 121 + }, + { + "name": "refiner_model", + "type": "MODEL", + "link": 124, + "slot_index": 5 + }, + { + "name": "refiner_clip", + "type": "CLIP", + "link": 126 + }, + { + "name": "refiner_positive", + "type": "CONDITIONING", + "link": 127, + "slot_index": 7 + }, + { + "name": "refiner_negative", + "type": "CONDITIONING", + "link": 128 + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "link": 129 + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "link": 130, + "slot_index": 10 + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "link": 131 + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": 132, + "slot_index": 12 + } + ], + "outputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ToDetailerPipeSDXL" + }, + "widgets_values": [ + "", + "SDXL/person/IU_leejieun_SDXL.safetensors" + ] + }, + { + "id": 42, + "type": "ToBasicPipe", + "pos": [ + 1899, + 906 + ], + "size": { + "0": 241.79998779296875, + "1": 106 + }, + "flags": {}, + "order": 12, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 137 + }, + { + "name": "clip", + "type": "CLIP", + "link": 138, + "slot_index": 1 + }, + { + "name": "vae", + "type": "VAE", + "link": 139, + "slot_index": 2 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 147, + "slot_index": 3 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 148, + "slot_index": 4 + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 87, + 107 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ToBasicPipe" + } + }, + { + "id": 51, + "type": "FromDetailerPipeSDXL", + "pos": [ + 1650, + 520 + ], + "size": { + "0": 393, + "1": 286 + }, + "flags": {}, + "order": 11, + "mode": 0, + "inputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 114 + } + ], + "outputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": null, + "shape": 3 + }, + { + "name": "model", + "type": "MODEL", + "links": [ + 125, + 137 + ], + "shape": 3, + "slot_index": 1 + }, + { + "name": "clip", + "type": "CLIP", + "links": [ + 116, + 138, + 143 + ], + "shape": 3, + "slot_index": 2 + }, + { + "name": "vae", + "type": "VAE", + "links": [ + 117, + 139, + 145 + ], + "shape": 3, + "slot_index": 3 + }, + { + "name": "positive", + "type": "CONDITIONING", + "links": [ + 120, + 147 + ], + "shape": 3, + "slot_index": 4 + }, + { + "name": "negative", + "type": "CONDITIONING", + "links": [ + 121, + 148 + ], + "shape": 3, + "slot_index": 5 + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "links": [ + 129, + 133 + ], + "shape": 3, + "slot_index": 6 + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "links": [ + 130, + 134 + ], + "shape": 3, + "slot_index": 7 + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "links": [ + 131, + 135 + ], + "shape": 3, + "slot_index": 8 + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "links": [ + 132, + 136 + ], + "shape": 3, + "slot_index": 9 + }, + { + "name": "refiner_model", + "type": "MODEL", + "links": [ + 124, + 142 + ], + "shape": 3, + "slot_index": 10 + }, + { + "name": "refiner_clip", + "type": "CLIP", + "links": [ + 126 + ], + "shape": 3, + "slot_index": 11 + }, + { + "name": "refiner_positive", + "type": "CONDITIONING", 
+ "links": [ + 127, + 149 + ], + "shape": 3, + "slot_index": 12 + }, + { + "name": "refiner_negative", + "type": "CONDITIONING", + "links": [ + 128, + 150 + ], + "shape": 3, + "slot_index": 13 + } + ], + "properties": { + "Node name for S&R": "FromDetailerPipeSDXL" + } + } + ], + "links": [ + [ + 1, + 7, + 0, + 6, + 0, + "MODEL" + ], + [ + 2, + 7, + 1, + 6, + 1, + "CLIP" + ], + [ + 3, + 7, + 2, + 6, + 2, + "VAE" + ], + [ + 5, + 8, + 1, + 6, + 6, + "CLIP" + ], + [ + 6, + 7, + 1, + 9, + 0, + "CLIP" + ], + [ + 7, + 7, + 1, + 10, + 0, + "CLIP" + ], + [ + 8, + 10, + 0, + 6, + 4, + "CONDITIONING" + ], + [ + 9, + 9, + 0, + 6, + 3, + "CONDITIONING" + ], + [ + 10, + 8, + 1, + 11, + 0, + "CLIP" + ], + [ + 11, + 8, + 1, + 12, + 0, + "CLIP" + ], + [ + 13, + 12, + 0, + 6, + 8, + "CONDITIONING" + ], + [ + 15, + 13, + 0, + 1, + 0, + "IMAGE" + ], + [ + 16, + 14, + 0, + 6, + 9, + "BBOX_DETECTOR" + ], + [ + 17, + 15, + 0, + 6, + 10, + "SAM_MODEL" + ], + [ + 18, + 1, + 0, + 16, + 0, + "IMAGE" + ], + [ + 19, + 1, + 1, + 17, + 0, + "IMAGE" + ], + [ + 20, + 1, + 2, + 18, + 0, + "IMAGE" + ], + [ + 64, + 13, + 0, + 37, + 0, + "IMAGE" + ], + [ + 67, + 1, + 0, + 38, + 0, + "IMAGE" + ], + [ + 69, + 8, + 0, + 6, + 5, + "MODEL" + ], + [ + 70, + 11, + 0, + 6, + 7, + "CONDITIONING" + ], + [ + 86, + 41, + 0, + 1, + 1, + "DETAILER_PIPE" + ], + [ + 87, + 42, + 0, + 41, + 0, + "BASIC_PIPE" + ], + [ + 88, + 43, + 0, + 41, + 1, + "BASIC_PIPE" + ], + [ + 103, + 1, + 4, + 44, + 1, + "DETAILER_PIPE" + ], + [ + 104, + 1, + 0, + 44, + 0, + "IMAGE" + ], + [ + 105, + 44, + 2, + 45, + 0, + "IMAGE" + ], + [ + 106, + 1, + 0, + 47, + 0, + "IMAGE" + ], + [ + 107, + 42, + 0, + 47, + 2, + "BASIC_PIPE" + ], + [ + 108, + 43, + 0, + 47, + 4, + "BASIC_PIPE" + ], + [ + 110, + 41, + 0, + 49, + 0, + "DETAILER_PIPE" + ], + [ + 111, + 49, + 0, + 47, + 1, + "SEGS" + ], + [ + 112, + 13, + 0, + 49, + 1, + "IMAGE" + ], + [ + 113, + 47, + 0, + 50, + 0, + "IMAGE" + ], + [ + 114, + 6, + 0, + 51, + 0, + "DETAILER_PIPE" + ], + [ + 116, + 51, + 2, + 40, + 1, + "CLIP" + ], + [ + 117, + 51, + 3, + 40, + 2, + "VAE" + ], + [ + 120, + 51, + 4, + 40, + 3, + "CONDITIONING" + ], + [ + 121, + 51, + 5, + 40, + 4, + "CONDITIONING" + ], + [ + 124, + 51, + 10, + 40, + 5, + "MODEL" + ], + [ + 125, + 51, + 1, + 40, + 0, + "MODEL" + ], + [ + 126, + 51, + 11, + 40, + 6, + "CLIP" + ], + [ + 127, + 51, + 12, + 40, + 7, + "CONDITIONING" + ], + [ + 128, + 51, + 13, + 40, + 8, + "CONDITIONING" + ], + [ + 129, + 51, + 6, + 40, + 9, + "BBOX_DETECTOR" + ], + [ + 130, + 51, + 7, + 40, + 10, + "SAM_MODEL" + ], + [ + 131, + 51, + 8, + 40, + 11, + "SEGM_DETECTOR" + ], + [ + 132, + 51, + 9, + 40, + 12, + "DETAILER_HOOK" + ], + [ + 133, + 51, + 6, + 41, + 2, + "BBOX_DETECTOR" + ], + [ + 134, + 51, + 7, + 41, + 3, + "SAM_MODEL" + ], + [ + 135, + 51, + 8, + 41, + 4, + "SEGM_DETECTOR" + ], + [ + 136, + 51, + 9, + 41, + 5, + "DETAILER_HOOK" + ], + [ + 137, + 51, + 1, + 42, + 0, + "MODEL" + ], + [ + 138, + 51, + 2, + 42, + 1, + "CLIP" + ], + [ + 139, + 51, + 3, + 42, + 2, + "VAE" + ], + [ + 142, + 51, + 10, + 43, + 0, + "MODEL" + ], + [ + 143, + 51, + 2, + 43, + 1, + "CLIP" + ], + [ + 145, + 51, + 3, + 43, + 2, + "VAE" + ], + [ + 147, + 51, + 4, + 42, + 3, + "CONDITIONING" + ], + [ + 148, + 51, + 5, + 42, + 4, + "CONDITIONING" + ], + [ + 149, + 51, + 12, + 43, + 3, + "CONDITIONING" + ], + [ + 150, + 51, + 13, + 43, + 4, + "CONDITIONING" + ] + ], + "groups": [], + "config": {}, + "extra": {}, + "version": 0.4 +} \ No newline at end of file diff --git 
a/custom_nodes/ComfyUI-Impact-Pack/test/detailer-pipe-test.json b/custom_nodes/ComfyUI-Impact-Pack/test/detailer-pipe-test.json new file mode 100644 index 0000000000000000000000000000000000000000..56ed4239197a59dfb2fc1aa72facff138279aead --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/test/detailer-pipe-test.json @@ -0,0 +1,3489 @@ +{ + "last_node_id": 87, + "last_link_id": 214, + "nodes": [ + { + "id": 7, + "type": "CLIPTextEncode", + "pos": [ + 413, + 389 + ], + "size": { + "0": 425.27801513671875, + "1": 180.6060791015625 + }, + "flags": {}, + "order": 7, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 5, + "label": "clip" + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 6 + ], + "slot_index": 0, + "label": "CONDITIONING" + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "text, watermark, worst quality:1.4, low quality:1.4" + ] + }, + { + "id": 6, + "type": "CLIPTextEncode", + "pos": [ + 415, + 186 + ], + "size": { + "0": 422.84503173828125, + "1": 164.31304931640625 + }, + "flags": {}, + "order": 6, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 3, + "label": "clip" + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 4 + ], + "slot_index": 0, + "label": "CONDITIONING" + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "photorealistic:1.4, best quality:1.4, 2 girls on table " + ] + }, + { + "id": 5, + "type": "EmptyLatentImage", + "pos": [ + 473, + 609 + ], + "size": { + "0": 315, + "1": 106 + }, + "flags": {}, + "order": 0, + "mode": 0, + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 2 + ], + "slot_index": 0, + "label": "LATENT" + } + ], + "properties": { + "Node name for S&R": "EmptyLatentImage" + }, + "widgets_values": [ + 1024, + 768, + 1 + ] + }, + { + "id": 8, + "type": "VAEDecode", + "pos": [ + 1209, + 188 + ], + "size": { + "0": 210, + "1": 46 + }, + "flags": {}, + "order": 12, + "mode": 0, + "inputs": [ + { + "name": "samples", + "type": "LATENT", + "link": 7, + "label": "samples" + }, + { + "name": "vae", + "type": "VAE", + "link": 8, + "label": "vae" + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 10 + ], + "slot_index": 0, + "label": "IMAGE" + } + ], + "properties": { + "Node name for S&R": "VAEDecode" + } + }, + { + "id": 30, + "type": "PreviewImage", + "pos": [ + 2532, + -7 + ], + "size": { + "0": 575.2411499023438, + "1": 561.0116577148438 + }, + "flags": {}, + "order": 16, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 179, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 24, + "type": "SAMLoader", + "pos": [ + 861, + 1300 + ], + "size": { + "0": 315, + "1": 82 + }, + "flags": {}, + "order": 1, + "mode": 0, + "outputs": [ + { + "name": "SAM_MODEL", + "type": "SAM_MODEL", + "links": [ + 19, + 33 + ], + "shape": 3, + "slot_index": 0, + "label": "SAM_MODEL" + } + ], + "properties": { + "Node name for S&R": "SAMLoader" + }, + "widgets_values": [ + "sam_vit_b_01ec64.pth", + "AUTO" + ] + }, + { + "id": 32, + "type": "BasicPipeToDetailerPipe", + "pos": [ + 1396, + 1143 + ], + "size": { + "0": 400, + "1": 200 + }, + "flags": {}, + "order": 9, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 34, + "label": "basic_pipe" + }, + { + "name": "bbox_detector", 
+ "type": "BBOX_DETECTOR", + "link": 202, + "slot_index": 1, + "label": "bbox_detector" + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "link": 33, + "slot_index": 2, + "label": "sam_model_opt" + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "link": 213, + "label": "segm_detector_opt" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null, + "label": "detailer_hook" + } + ], + "outputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [ + 36 + ], + "shape": 3, + "slot_index": 0, + "label": "detailer_pipe" + } + ], + "properties": { + "Node name for S&R": "BasicPipeToDetailerPipe" + }, + "widgets_values": [ + "photorealistic:1.4, best quality:1.4, detailed eyes, \n__face_loras__ [faint smile|surprise|laugh]", + "Select the LoRA to add to the text" + ] + }, + { + "id": 36, + "type": "MaskToImage", + "pos": [ + 2650, + 1230 + ], + "size": { + "0": 210, + "1": 26 + }, + "flags": {}, + "order": 20, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 182, + "label": "mask" + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 59 + ], + "shape": 3, + "slot_index": 0, + "label": "IMAGE" + } + ], + "properties": { + "Node name for S&R": "MaskToImage" + } + }, + { + "id": 52, + "type": "BboxDetectorSEGS", + "pos": [ + 4948, + 677 + ], + "size": { + "0": 315, + "1": 150 + }, + "flags": {}, + "order": 33, + "mode": 0, + "inputs": [ + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "link": 85, + "label": "bbox_detector" + }, + { + "name": "image", + "type": "IMAGE", + "link": 188, + "label": "image" + } + ], + "outputs": [ + { + "name": "SEGS", + "type": "SEGS", + "links": [ + 87, + 160 + ], + "shape": 3, + "slot_index": 0, + "label": "SEGS" + } + ], + "properties": { + "Node name for S&R": "BboxDetectorSEGS" + }, + "widgets_values": [ + 0.5, + 10, + 3, + 10 + ] + }, + { + "id": 46, + "type": "DetailerPipeToBasicPipe", + "pos": [ + 4753, + 1188 + ], + "size": { + "0": 304.79998779296875, + "1": 26 + }, + "flags": {}, + "order": 31, + "mode": 0, + "inputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 77, + "label": "detailer_pipe" + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 155, + 196 + ], + "shape": 3, + "slot_index": 0, + "label": "basic_pipe" + } + ], + "properties": { + "Node name for S&R": "DetailerPipeToBasicPipe" + } + }, + { + "id": 60, + "type": "PreviewImage", + "pos": [ + 6270, + 2420 + ], + "size": { + "0": 600, + "1": 670 + }, + "flags": {}, + "order": 39, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 166, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 57, + "type": "PreviewImage", + "pos": [ + 5997, + 1424 + ], + "size": { + "0": 840, + "1": 640 + }, + "flags": {}, + "order": 46, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 144, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 54, + "type": "PreviewImage", + "pos": [ + 6486, + 705 + ], + "size": { + "0": 740, + "1": 580 + }, + "flags": {}, + "order": 51, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 197, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 64, + "type": "PreviewImage", + "pos": [ + 6800, + -300 + ], + "size": { + "0": 570, + "1": 590 + }, + 
"flags": {}, + "order": 47, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 156, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 42, + "type": "PreviewImage", + "pos": [ + 4070, + 636 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 26, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 187, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 43, + "type": "MaskToImage", + "pos": [ + 4081, + 949 + ], + "size": { + "0": 176.39999389648438, + "1": 26 + }, + "flags": {}, + "order": 28, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 190, + "label": "mask" + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 75 + ], + "shape": 3, + "slot_index": 0, + "label": "IMAGE" + } + ], + "properties": { + "Node name for S&R": "MaskToImage" + } + }, + { + "id": 44, + "type": "PreviewImage", + "pos": [ + 4072, + 1029 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 30, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 75, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 37, + "type": "PreviewImage", + "pos": [ + 2890, + 1250 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 22, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 59, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 22, + "type": "BasicPipeToDetailerPipe", + "pos": [ + 1396, + 866 + ], + "size": { + "0": 400, + "1": 200 + }, + "flags": {}, + "order": 8, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 17, + "label": "basic_pipe" + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "link": 201, + "slot_index": 1, + "label": "bbox_detector" + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "link": 19, + "slot_index": 2, + "label": "sam_model_opt" + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "link": 212, + "label": "segm_detector_opt" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null, + "label": "detailer_hook" + } + ], + "outputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [], + "shape": 3, + "slot_index": 0, + "label": "detailer_pipe" + } + ], + "properties": { + "Node name for S&R": "BasicPipeToDetailerPipe" + }, + "widgets_values": [ + "photorealistic:1.4, best quality:1.4, detailed eyes, \n[|||] [faint smile|surprise|laugh]", + "Select the LoRA to add to the text" + ] + }, + { + "id": 75, + "type": "PreviewImage", + "pos": [ + 2600, + 1330 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 19, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 181, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 10, + "type": "PreviewBridge", + "pos": [ + 1462, + 175 + ], + "size": { + "0": 315, + "1": 290 + }, + "flags": {}, + "order": 13, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 10, + "label": "images" + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 169, + 183 + ], + "shape": 3, + "slot_index": 0, + "label": "IMAGE" + }, + { + "name": "MASK", + "type": "MASK", + "links": null, + 
"shape": 3, + "label": "MASK" + } + ], + "properties": { + "Node name for S&R": "PreviewBridge" + }, + "widgets_values": [ + "#placeholder" + ] + }, + { + "id": 41, + "type": "PreviewImage", + "pos": [ + 4301, + 119 + ], + "size": { + "0": 492.20916748046875, + "1": 448.6293029785156 + }, + "flags": {}, + "order": 25, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 186, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 78, + "type": "PreviewImage", + "pos": [ + 4075, + 1364 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 27, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 189, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 3, + "type": "KSampler", + "pos": [ + 863, + 183 + ], + "size": { + "0": 315, + "1": 474 + }, + "flags": {}, + "order": 10, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 1, + "label": "model" + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 4, + "label": "positive" + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 6, + "label": "negative" + }, + { + "name": "latent_image", + "type": "LATENT", + "link": 2, + "label": "latent_image" + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 7 + ], + "slot_index": 0, + "label": "LATENT" + } + ], + "properties": { + "Node name for S&R": "KSampler" + }, + "widgets_values": [ + 885412539640489, + "fixed", + 15, + 8, + "euler", + "normal", + 1 + ] + }, + { + "id": 45, + "type": "EditDetailerPipe", + "pos": [ + 4338, + 950 + ], + "size": { + "0": 284.0971374511719, + "1": 316.5133361816406 + }, + "flags": {}, + "order": 29, + "mode": 0, + "inputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 191, + "label": "detailer_pipe" + }, + { + "name": "model", + "type": "MODEL", + "link": null, + "label": "model" + }, + { + "name": "clip", + "type": "CLIP", + "link": null, + "label": "clip" + }, + { + "name": "vae", + "type": "VAE", + "link": null, + "label": "vae" + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": null, + "label": "positive" + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": null, + "label": "negative" + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "link": null, + "label": "bbox_detector" + }, + { + "name": "sam_model", + "type": "SAM_MODEL", + "link": null, + "label": "sam_model" + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "link": null, + "label": "segm_detector_opt" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null, + "label": "detailer_hook" + } + ], + "outputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [ + 77, + 82 + ], + "shape": 3, + "slot_index": 0, + "label": "detailer_pipe" + } + ], + "properties": { + "Node name for S&R": "EditDetailerPipe" + }, + "widgets_values": [ + "", + "Select the LoRA to add to the text" + ] + }, + { + "id": 65, + "type": "PreviewImage", + "pos": [ + 6430, + -300 + ], + "size": { + "0": 330, + "1": 250 + }, + "flags": {}, + "order": 48, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 157, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 53, + "type": "MaskToSEGS", + "pos": [ + 5558, + 989 + ], + "size": { + "0": 315, + "1": 130 + }, + "flags": {}, + 
"order": 38, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 88, + "label": "mask" + } + ], + "outputs": [ + { + "name": "SEGS", + "type": "SEGS", + "links": [ + 138, + 154, + 195 + ], + "shape": 3, + "slot_index": 0, + "label": "SEGS" + } + ], + "properties": { + "Node name for S&R": "MaskToSEGS" + }, + "widgets_values": [ + false, + 3, + false, + 10 + ] + }, + { + "id": 81, + "type": "DetailerForEachPipe", + "pos": [ + 6092, + 708 + ], + "size": { + "0": 329.5368957519531, + "1": 598 + }, + "flags": {}, + "order": 45, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 194, + "label": "image" + }, + { + "name": "segs", + "type": "SEGS", + "link": 195, + "label": "segs" + }, + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 196, + "label": "basic_pipe" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null, + "label": "detailer_hook" + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 197 + ], + "shape": 3, + "slot_index": 0, + "label": "IMAGE" + } + ], + "properties": { + "Node name for S&R": "DetailerForEachPipe" + }, + "widgets_values": [ + 256, + true, + 768, + 44457634171318, + "fixed", + 20, + 8, + "euler", + "normal", + 0.5, + 5, + true, + false, + "" + ] + }, + { + "id": 72, + "type": "DetailerForEachDebugPipe", + "pos": [ + 5938, + -58 + ], + "size": { + "0": 330, + "1": 618 + }, + "flags": {}, + "order": 44, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 153, + "label": "image" + }, + { + "name": "segs", + "type": "SEGS", + "link": 154, + "label": "segs" + }, + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 155, + "label": "basic_pipe" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null, + "label": "detailer_hook" + } + ], + "outputs": [ + { + "name": "image", + "type": "IMAGE", + "links": [ + 156 + ], + "shape": 3, + "slot_index": 0, + "label": "image" + }, + { + "name": "cropped", + "type": "IMAGE", + "links": [ + 157 + ], + "shape": 6, + "slot_index": 1, + "label": "cropped" + }, + { + "name": "cropped_refined", + "type": "IMAGE", + "links": [ + 158 + ], + "shape": 6, + "slot_index": 2, + "label": "cropped_refined" + }, + { + "name": "cropped_refined_alpha", + "type": "IMAGE", + "links": [ + 200 + ], + "shape": 6, + "slot_index": 3, + "label": "cropped_refined_alpha" + } + ], + "properties": { + "Node name for S&R": "DetailerForEachDebugPipe" + }, + "widgets_values": [ + 256, + true, + 768, + 0, + "fixed", + 20, + 8, + "euler", + "normal", + 0.5, + 5, + true, + false, + "" + ] + }, + { + "id": 66, + "type": "PreviewImage", + "pos": [ + 6430, + 30 + ], + "size": { + "0": 330, + "1": 260 + }, + "flags": {}, + "order": 49, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 158, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 82, + "type": "PreviewImage", + "pos": [ + 6435, + 355 + ], + "size": { + "0": 319.2451171875, + "1": 285.4361572265625 + }, + "flags": {}, + "order": 50, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 200, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 83, + "type": "UltralyticsDetectorProvider", + "pos": [ + 860, + 1160 + ], + "size": { + "0": 315, + "1": 78 + }, + "flags": {}, + "order": 2, + "mode": 0, + "outputs": [ + { + "name": "BBOX_DETECTOR", + "type": "BBOX_DETECTOR", + "links": [ + 
201, + 202 + ], + "shape": 3, + "slot_index": 0, + "label": "BBOX_DETECTOR" + }, + { + "name": "SEGM_DETECTOR", + "type": "SEGM_DETECTOR", + "links": null, + "shape": 3, + "slot_index": 1, + "label": "SEGM_DETECTOR" + } + ], + "properties": { + "Node name for S&R": "UltralyticsDetectorProvider" + }, + "widgets_values": [ + "bbox/face_yolov8m.pt" + ] + }, + { + "id": 69, + "type": "DetailerForEach", + "pos": [ + 5610, + 1425 + ], + "size": { + "0": 315, + "1": 678 + }, + "flags": {}, + "order": 43, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 137, + "label": "image" + }, + { + "name": "segs", + "type": "SEGS", + "link": 138, + "label": "segs" + }, + { + "name": "model", + "type": "MODEL", + "link": 139, + "label": "model" + }, + { + "name": "clip", + "type": "CLIP", + "link": 140, + "label": "clip" + }, + { + "name": "vae", + "type": "VAE", + "link": 141, + "label": "vae" + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 142, + "label": "positive" + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 143, + "label": "negative" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null, + "label": "detailer_hook" + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 144 + ], + "shape": 3, + "slot_index": 0, + "label": "IMAGE" + } + ], + "properties": { + "Node name for S&R": "DetailerForEach" + }, + "widgets_values": [ + 256, + true, + 768, + 0, + "fixed", + 20, + 8, + "euler", + "normal", + 0.5, + 5, + true, + false, + "" + ] + }, + { + "id": 50, + "type": "FromDetailerPipe", + "pos": [ + 4730, + 1460 + ], + "size": { + "0": 342.5999755859375, + "1": 186 + }, + "flags": {}, + "order": 32, + "mode": 0, + "inputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 82, + "label": "detailer_pipe" + } + ], + "outputs": [ + { + "name": "model", + "type": "MODEL", + "links": [ + 139, + 161 + ], + "shape": 3, + "slot_index": 0, + "label": "model" + }, + { + "name": "clip", + "type": "CLIP", + "links": [ + 140, + 162 + ], + "shape": 3, + "slot_index": 1, + "label": "clip" + }, + { + "name": "vae", + "type": "VAE", + "links": [ + 141, + 163 + ], + "shape": 3, + "slot_index": 2, + "label": "vae" + }, + { + "name": "positive", + "type": "CONDITIONING", + "links": [ + 142, + 164 + ], + "shape": 3, + "slot_index": 3, + "label": "positive" + }, + { + "name": "negative", + "type": "CONDITIONING", + "links": [ + 143, + 165 + ], + "shape": 3, + "slot_index": 4, + "label": "negative" + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "links": [ + 85 + ], + "shape": 3, + "slot_index": 5, + "label": "bbox_detector" + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "links": [ + 83 + ], + "shape": 3, + "slot_index": 6, + "label": "sam_model_opt" + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "links": [ + 204 + ], + "shape": 3, + "slot_index": 7, + "label": "segm_detector_opt" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "links": null, + "shape": 3, + "label": "detailer_hook" + } + ], + "properties": { + "Node name for S&R": "FromDetailerPipe" + } + }, + { + "id": 51, + "type": "SAMDetectorCombined", + "pos": [ + 5125, + 894 + ], + "size": { + "0": 315, + "1": 218 + }, + "flags": {}, + "order": 35, + "mode": 0, + "inputs": [ + { + "name": "sam_model", + "type": "SAM_MODEL", + "link": 83, + "label": "sam_model" + }, + { + "name": "segs", + "type": "SEGS", + "link": 87, + "label": "segs" + }, + { + "name": "image", + 
"type": "IMAGE", + "link": 205, + "label": "image" + } + ], + "outputs": [ + { + "name": "MASK", + "type": "MASK", + "links": [ + 88 + ], + "shape": 3, + "slot_index": 0, + "label": "MASK" + } + ], + "properties": { + "Node name for S&R": "SAMDetectorCombined" + }, + "widgets_values": [ + "center-1", + 0, + 0.93, + 0, + 0.7, + "False" + ] + }, + { + "id": 85, + "type": "SEGSToImageList", + "pos": [ + 5569.134812187498, + 1289.240372597656 + ], + "size": { + "0": 304.79998779296875, + "1": 46 + }, + "flags": {}, + "order": 37, + "mode": 0, + "inputs": [ + { + "name": "segs", + "type": "SEGS", + "link": 207, + "label": "segs" + }, + { + "name": "fallback_image_opt", + "type": "IMAGE", + "link": 208, + "label": "fallback_image_opt" + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 209 + ], + "shape": 6, + "slot_index": 0, + "label": "IMAGE" + } + ], + "properties": { + "Node name for S&R": "SEGSToImageList" + } + }, + { + "id": 86, + "type": "PreviewImage", + "pos": [ + 6910, + 1420 + ], + "size": { + "0": 409.85064697265625, + "1": 614.9011840820312 + }, + "flags": {}, + "order": 42, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 209, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 39, + "type": "ToDetailerPipe", + "pos": [ + 3167, + 631 + ], + "size": { + "0": 400, + "1": 260 + }, + "flags": {}, + "order": 23, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 61, + "label": "model" + }, + { + "name": "clip", + "type": "CLIP", + "link": 62, + "label": "clip" + }, + { + "name": "vae", + "type": "VAE", + "link": 65, + "label": "vae" + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 66, + "label": "positive" + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 67, + "label": "negative" + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "link": 68, + "label": "bbox_detector" + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "link": 69, + "label": "sam_model_opt" + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "link": 203, + "label": "segm_detector_opt" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null, + "label": "detailer_hook" + } + ], + "outputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [ + 210 + ], + "shape": 3, + "slot_index": 0, + "label": "detailer_pipe" + } + ], + "properties": { + "Node name for S&R": "ToDetailerPipe" + }, + "widgets_values": [ + "", + "Select the LoRA to add to the text" + ] + }, + { + "id": 76, + "type": "FaceDetailerPipe", + "pos": [ + 3648, + 641 + ], + "size": { + "0": 347.608154296875, + "1": 1060.470947265625 + }, + "flags": {}, + "order": 24, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 184, + "label": "image" + }, + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 210, + "label": "detailer_pipe" + } + ], + "outputs": [ + { + "name": "image", + "type": "IMAGE", + "links": [ + 186, + 188 + ], + "shape": 3, + "slot_index": 0, + "label": "image" + }, + { + "name": "cropped_refined", + "type": "IMAGE", + "links": [ + 187 + ], + "shape": 6, + "slot_index": 1, + "label": "cropped_refined" + }, + { + "name": "cropped_enhanced_alpha", + "type": "IMAGE", + "links": [ + 189 + ], + "shape": 6, + "slot_index": 2, + "label": "cropped_enhanced_alpha" + }, + { + "name": "mask", + "type": "MASK", + "links": [ + 190 + ], + "shape": 3, + 
"slot_index": 3, + "label": "mask" + }, + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [ + 191 + ], + "shape": 3, + "slot_index": 4, + "label": "detailer_pipe" + } + ], + "properties": { + "Node name for S&R": "FaceDetailerPipe" + }, + "widgets_values": [ + 256, + true, + 768, + 284739423125169, + "fixed", + 20, + 8, + "euler", + "normal", + 0.5, + 5, + true, + false, + 0.5, + 10, + 3, + "center-1", + 0, + 0.93, + 0, + 0.7, + "False", + 10 + ] + }, + { + "id": 49, + "type": "Reroute", + "pos": [ + 4967, + 568 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 17, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 211, + "label": "" + } + ], + "outputs": [ + { + "name": "", + "type": "IMAGE", + "links": [ + 137, + 153, + 159, + 194 + ], + "slot_index": 0, + "label": "" + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 27, + "type": "PreviewImage", + "pos": [ + 2590, + 920 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 18, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 180, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 74, + "type": "FaceDetailer", + "pos": [ + 2050, + 580 + ], + "size": { + "0": 372.5969543457031, + "1": 1103.0477294921875 + }, + "flags": {}, + "order": 14, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 169, + "label": "image" + }, + { + "name": "model", + "type": "MODEL", + "link": 170, + "label": "model" + }, + { + "name": "clip", + "type": "CLIP", + "link": 171, + "label": "clip" + }, + { + "name": "vae", + "type": "VAE", + "link": 172, + "label": "vae" + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 175, + "label": "positive" + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 176, + "slot_index": 5, + "label": "negative" + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "link": 177, + "label": "bbox_detector" + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "link": 178, + "label": "sam_model_opt" + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "link": 214, + "label": "segm_detector_opt" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null, + "label": "detailer_hook" + } + ], + "outputs": [ + { + "name": "image", + "type": "IMAGE", + "links": [ + 179, + 211 + ], + "shape": 3, + "slot_index": 0, + "label": "image" + }, + { + "name": "cropped_refined", + "type": "IMAGE", + "links": [ + 180 + ], + "shape": 6, + "slot_index": 1, + "label": "cropped_refined" + }, + { + "name": "cropped_enhanced_alpha", + "type": "IMAGE", + "links": [ + 181 + ], + "shape": 6, + "slot_index": 2, + "label": "cropped_enhanced_alpha" + }, + { + "name": "mask", + "type": "MASK", + "links": [ + 182 + ], + "shape": 3, + "slot_index": 3, + "label": "mask" + }, + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "links": [ + 193 + ], + "shape": 3, + "slot_index": 4, + "label": "detailer_pipe" + } + ], + "properties": { + "Node name for S&R": "FaceDetailer" + }, + "widgets_values": [ + 256, + true, + 768, + 872368928997833, + "fixed", + 20, + 8, + "euler", + "normal", + 0.5, + 5, + true, + false, + 0.5, + 10, + 3, + "center-1", + 0, + 0.93, + 0, + 0.7, + "False", + 10, + "" + ] + }, + { + "id": 38, + "type": "FromDetailerPipe", + "pos": [ + 2740, + 630 + ], + "size": { + "0": 342.5999755859375, + "1": 186 + }, + "flags": {}, + "order": 21, + "mode": 
0, + "inputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 193, + "label": "detailer_pipe" + } + ], + "outputs": [ + { + "name": "model", + "type": "MODEL", + "links": [ + 61 + ], + "shape": 3, + "slot_index": 0, + "label": "model" + }, + { + "name": "clip", + "type": "CLIP", + "links": [ + 62 + ], + "shape": 3, + "slot_index": 1, + "label": "clip" + }, + { + "name": "vae", + "type": "VAE", + "links": [ + 65 + ], + "shape": 3, + "slot_index": 2, + "label": "vae" + }, + { + "name": "positive", + "type": "CONDITIONING", + "links": [ + 66 + ], + "shape": 3, + "slot_index": 3, + "label": "positive" + }, + { + "name": "negative", + "type": "CONDITIONING", + "links": [ + 67 + ], + "shape": 3, + "slot_index": 4, + "label": "negative" + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "links": [ + 68 + ], + "shape": 3, + "slot_index": 5, + "label": "bbox_detector" + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "links": [ + 69 + ], + "shape": 3, + "slot_index": 6, + "label": "sam_model_opt" + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "links": [ + 203 + ], + "shape": 3, + "slot_index": 7, + "label": "segm_detector_opt" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "links": null, + "shape": 3, + "label": "detailer_hook" + } + ], + "properties": { + "Node name for S&R": "FromDetailerPipe" + } + }, + { + "id": 87, + "type": "UltralyticsDetectorProvider", + "pos": [ + 862, + 1445 + ], + "size": { + "0": 315, + "1": 78 + }, + "flags": {}, + "order": 3, + "mode": 0, + "outputs": [ + { + "name": "BBOX_DETECTOR", + "type": "BBOX_DETECTOR", + "links": [], + "shape": 3, + "slot_index": 0, + "label": "BBOX_DETECTOR" + }, + { + "name": "SEGM_DETECTOR", + "type": "SEGM_DETECTOR", + "links": [ + 212, + 213 + ], + "shape": 3, + "slot_index": 1, + "label": "SEGM_DETECTOR" + } + ], + "properties": { + "Node name for S&R": "UltralyticsDetectorProvider" + }, + "widgets_values": [ + "segm/person_yolov8m-seg.pt" + ] + }, + { + "id": 77, + "type": "Reroute", + "pos": [ + 3500, + 170 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 15, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 183, + "label": "" + } + ], + "outputs": [ + { + "name": "", + "type": "IMAGE", + "links": [ + 184, + 205, + 206, + 208 + ], + "slot_index": 0, + "label": "" + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 84, + "type": "SegmDetectorSEGS", + "pos": [ + 5130, + 1240 + ], + "size": { + "0": 315, + "1": 150 + }, + "flags": {}, + "order": 34, + "mode": 0, + "inputs": [ + { + "name": "segm_detector", + "type": "SEGM_DETECTOR", + "link": 204, + "label": "segm_detector" + }, + { + "name": "image", + "type": "IMAGE", + "link": 206, + "label": "image" + } + ], + "outputs": [ + { + "name": "SEGS", + "type": "SEGS", + "links": [ + 207 + ], + "shape": 3, + "slot_index": 0, + "label": "SEGS" + } + ], + "properties": { + "Node name for S&R": "SegmDetectorSEGS" + }, + "widgets_values": [ + 0.5, + 10, + 3, + 1 + ] + }, + { + "id": 34, + "type": "FromDetailerPipe", + "pos": [ + 1737, + -34 + ], + "size": { + "0": 342.5999755859375, + "1": 186 + }, + "flags": {}, + "order": 11, + "mode": 0, + "inputs": [ + { + "name": "detailer_pipe", + "type": "DETAILER_PIPE", + "link": 36, + "label": "detailer_pipe" + } + ], + "outputs": [ + { + "name": "model", + "type": "MODEL", + "links": [ + 170 + ], + "shape": 3, + "slot_index": 0, + "label": "model" + }, + { + "name": "clip", + "type": "CLIP", 
+ "links": [ + 171 + ], + "shape": 3, + "slot_index": 1, + "label": "clip" + }, + { + "name": "vae", + "type": "VAE", + "links": [ + 172 + ], + "shape": 3, + "slot_index": 2, + "label": "vae" + }, + { + "name": "positive", + "type": "CONDITIONING", + "links": [ + 175 + ], + "shape": 3, + "slot_index": 3, + "label": "positive" + }, + { + "name": "negative", + "type": "CONDITIONING", + "links": [ + 176 + ], + "shape": 3, + "slot_index": 4, + "label": "negative" + }, + { + "name": "bbox_detector", + "type": "BBOX_DETECTOR", + "links": [ + 177 + ], + "shape": 3, + "slot_index": 5, + "label": "bbox_detector" + }, + { + "name": "sam_model_opt", + "type": "SAM_MODEL", + "links": [ + 178 + ], + "shape": 3, + "slot_index": 6, + "label": "sam_model_opt" + }, + { + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR", + "links": [ + 214 + ], + "shape": 3, + "slot_index": 7, + "label": "segm_detector_opt" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "links": null, + "shape": 3, + "label": "detailer_hook" + } + ], + "properties": { + "Node name for S&R": "FromDetailerPipe" + } + }, + { + "id": 73, + "type": "DetailerForEachDebug", + "pos": [ + 5603, + 2282 + ], + "size": { + "0": 315, + "1": 678 + }, + "flags": {}, + "order": 36, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 159, + "label": "image" + }, + { + "name": "segs", + "type": "SEGS", + "link": 160, + "label": "segs" + }, + { + "name": "model", + "type": "MODEL", + "link": 161, + "label": "model" + }, + { + "name": "clip", + "type": "CLIP", + "link": 162, + "label": "clip" + }, + { + "name": "vae", + "type": "VAE", + "link": 163, + "label": "vae" + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 164, + "label": "positive" + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 165, + "label": "negative" + }, + { + "name": "detailer_hook", + "type": "DETAILER_HOOK", + "link": null, + "label": "detailer_hook" + } + ], + "outputs": [ + { + "name": "image", + "type": "IMAGE", + "links": [ + 166 + ], + "shape": 3, + "slot_index": 0, + "label": "image" + }, + { + "name": "cropped", + "type": "IMAGE", + "links": [ + 167 + ], + "shape": 6, + "slot_index": 1, + "label": "cropped" + }, + { + "name": "cropped_refined", + "type": "IMAGE", + "links": [ + 168 + ], + "shape": 6, + "slot_index": 2, + "label": "cropped_refined" + }, + { + "name": "cropped_refined_alpha", + "type": "IMAGE", + "links": null, + "shape": 6, + "label": "cropped_refined_alpha" + } + ], + "properties": { + "Node name for S&R": "DetailerForEachDebug" + }, + "widgets_values": [ + 256, + true, + 768, + 225176759887640, + "fixed", + 20, + 8, + "euler", + "normal", + 0.5, + 5, + true, + false, + "" + ] + }, + { + "id": 61, + "type": "PreviewImage", + "pos": [ + 6000, + 2450 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 40, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 167, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 62, + "type": "PreviewImage", + "pos": [ + 5990, + 2780 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 41, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 168, + "label": "images" + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 4, + "type": "CheckpointLoaderSimple", + "pos": [ + 26, + 474 + ], + "size": { + "0": 315, + "1": 98 + }, + "flags": {}, + "order": 4, + "mode": 0, + "outputs": 
[ + { + "name": "MODEL", + "type": "MODEL", + "links": [ + 1 + ], + "slot_index": 0, + "label": "MODEL" + }, + { + "name": "CLIP", + "type": "CLIP", + "links": [ + 3, + 5 + ], + "slot_index": 1, + "label": "CLIP" + }, + { + "name": "VAE", + "type": "VAE", + "links": [ + 8 + ], + "slot_index": 2, + "label": "VAE" + } + ], + "properties": { + "Node name for S&R": "CheckpointLoaderSimple" + }, + "widgets_values": [ + "SD1.5/V07_v07.safetensors" + ] + }, + { + "id": 19, + "type": "## make-basic_pipe [2c8c61]", + "pos": [ + 502, + 860 + ], + "size": { + "0": 400, + "1": 200 + }, + "flags": {}, + "order": 5, + "mode": 0, + "inputs": [ + { + "name": "vae_opt", + "type": "VAE", + "link": null, + "label": "vae_opt" + } + ], + "outputs": [ + { + "name": "BASIC_PIPE", + "type": "BASIC_PIPE", + "links": [ + 17, + 34 + ], + "shape": 3, + "slot_index": 0, + "label": "BASIC_PIPE" + } + ], + "title": "## make-basic_pipe", + "properties": { + "Node name for S&R": "## make-basic_pipe [2c8c61]" + }, + "widgets_values": [ + "SD1.5/V07_v07.safetensors", + "", + "text, watermark, worst quality:1.4, low quality:1.4" + ] + } + ], + "links": [ + [ + 1, + 4, + 0, + 3, + 0, + "MODEL" + ], + [ + 2, + 5, + 0, + 3, + 3, + "LATENT" + ], + [ + 3, + 4, + 1, + 6, + 0, + "CLIP" + ], + [ + 4, + 6, + 0, + 3, + 1, + "CONDITIONING" + ], + [ + 5, + 4, + 1, + 7, + 0, + "CLIP" + ], + [ + 6, + 7, + 0, + 3, + 2, + "CONDITIONING" + ], + [ + 7, + 3, + 0, + 8, + 0, + "LATENT" + ], + [ + 8, + 4, + 2, + 8, + 1, + "VAE" + ], + [ + 10, + 8, + 0, + 10, + 0, + "IMAGE" + ], + [ + 17, + 19, + 0, + 22, + 0, + "BASIC_PIPE" + ], + [ + 19, + 24, + 0, + 22, + 2, + "SAM_MODEL" + ], + [ + 33, + 24, + 0, + 32, + 2, + "SAM_MODEL" + ], + [ + 34, + 19, + 0, + 32, + 0, + "BASIC_PIPE" + ], + [ + 36, + 32, + 0, + 34, + 0, + "DETAILER_PIPE" + ], + [ + 59, + 36, + 0, + 37, + 0, + "IMAGE" + ], + [ + 61, + 38, + 0, + 39, + 0, + "MODEL" + ], + [ + 62, + 38, + 1, + 39, + 1, + "CLIP" + ], + [ + 65, + 38, + 2, + 39, + 2, + "VAE" + ], + [ + 66, + 38, + 3, + 39, + 3, + "CONDITIONING" + ], + [ + 67, + 38, + 4, + 39, + 4, + "CONDITIONING" + ], + [ + 68, + 38, + 5, + 39, + 5, + "BBOX_DETECTOR" + ], + [ + 69, + 38, + 6, + 39, + 6, + "SAM_MODEL" + ], + [ + 75, + 43, + 0, + 44, + 0, + "IMAGE" + ], + [ + 77, + 45, + 0, + 46, + 0, + "DETAILER_PIPE" + ], + [ + 82, + 45, + 0, + 50, + 0, + "DETAILER_PIPE" + ], + [ + 83, + 50, + 6, + 51, + 0, + "SAM_MODEL" + ], + [ + 85, + 50, + 5, + 52, + 0, + "BBOX_DETECTOR" + ], + [ + 87, + 52, + 0, + 51, + 1, + "SEGS" + ], + [ + 88, + 51, + 0, + 53, + 0, + "MASK" + ], + [ + 137, + 49, + 0, + 69, + 0, + "IMAGE" + ], + [ + 138, + 53, + 0, + 69, + 1, + "SEGS" + ], + [ + 139, + 50, + 0, + 69, + 2, + "MODEL" + ], + [ + 140, + 50, + 1, + 69, + 3, + "CLIP" + ], + [ + 141, + 50, + 2, + 69, + 4, + "VAE" + ], + [ + 142, + 50, + 3, + 69, + 5, + "CONDITIONING" + ], + [ + 143, + 50, + 4, + 69, + 6, + "CONDITIONING" + ], + [ + 144, + 69, + 0, + 57, + 0, + "IMAGE" + ], + [ + 153, + 49, + 0, + 72, + 0, + "IMAGE" + ], + [ + 154, + 53, + 0, + 72, + 1, + "SEGS" + ], + [ + 155, + 46, + 0, + 72, + 2, + "BASIC_PIPE" + ], + [ + 156, + 72, + 0, + 64, + 0, + "IMAGE" + ], + [ + 157, + 72, + 1, + 65, + 0, + "IMAGE" + ], + [ + 158, + 72, + 2, + 66, + 0, + "IMAGE" + ], + [ + 159, + 49, + 0, + 73, + 0, + "IMAGE" + ], + [ + 160, + 52, + 0, + 73, + 1, + "SEGS" + ], + [ + 161, + 50, + 0, + 73, + 2, + "MODEL" + ], + [ + 162, + 50, + 1, + 73, + 3, + "CLIP" + ], + [ + 163, + 50, + 2, + 73, + 4, + "VAE" + ], + [ + 164, + 50, + 3, + 73, + 5, + "CONDITIONING" + ], + [ + 165, + 
50, + 4, + 73, + 6, + "CONDITIONING" + ], + [ + 166, + 73, + 0, + 60, + 0, + "IMAGE" + ], + [ + 167, + 73, + 1, + 61, + 0, + "IMAGE" + ], + [ + 168, + 73, + 2, + 62, + 0, + "IMAGE" + ], + [ + 169, + 10, + 0, + 74, + 0, + "IMAGE" + ], + [ + 170, + 34, + 0, + 74, + 1, + "MODEL" + ], + [ + 171, + 34, + 1, + 74, + 2, + "CLIP" + ], + [ + 172, + 34, + 2, + 74, + 3, + "VAE" + ], + [ + 175, + 34, + 3, + 74, + 4, + "CONDITIONING" + ], + [ + 176, + 34, + 4, + 74, + 5, + "CONDITIONING" + ], + [ + 177, + 34, + 5, + 74, + 6, + "BBOX_DETECTOR" + ], + [ + 178, + 34, + 6, + 74, + 7, + "SAM_MODEL" + ], + [ + 179, + 74, + 0, + 30, + 0, + "IMAGE" + ], + [ + 180, + 74, + 1, + 27, + 0, + "IMAGE" + ], + [ + 181, + 74, + 2, + 75, + 0, + "IMAGE" + ], + [ + 182, + 74, + 3, + 36, + 0, + "MASK" + ], + [ + 183, + 10, + 0, + 77, + 0, + "*" + ], + [ + 184, + 77, + 0, + 76, + 0, + "IMAGE" + ], + [ + 186, + 76, + 0, + 41, + 0, + "IMAGE" + ], + [ + 187, + 76, + 1, + 42, + 0, + "IMAGE" + ], + [ + 188, + 76, + 0, + 52, + 1, + "IMAGE" + ], + [ + 189, + 76, + 2, + 78, + 0, + "IMAGE" + ], + [ + 190, + 76, + 3, + 43, + 0, + "MASK" + ], + [ + 191, + 76, + 4, + 45, + 0, + "DETAILER_PIPE" + ], + [ + 193, + 74, + 4, + 38, + 0, + "DETAILER_PIPE" + ], + [ + 194, + 49, + 0, + 81, + 0, + "IMAGE" + ], + [ + 195, + 53, + 0, + 81, + 1, + "SEGS" + ], + [ + 196, + 46, + 0, + 81, + 2, + "BASIC_PIPE" + ], + [ + 197, + 81, + 0, + 54, + 0, + "IMAGE" + ], + [ + 200, + 72, + 3, + 82, + 0, + "IMAGE" + ], + [ + 201, + 83, + 0, + 22, + 1, + "BBOX_DETECTOR" + ], + [ + 202, + 83, + 0, + 32, + 1, + "BBOX_DETECTOR" + ], + [ + 203, + 38, + 7, + 39, + 7, + "SEGM_DETECTOR" + ], + [ + 204, + 50, + 7, + 84, + 0, + "SEGM_DETECTOR" + ], + [ + 205, + 77, + 0, + 51, + 2, + "IMAGE" + ], + [ + 206, + 77, + 0, + 84, + 1, + "IMAGE" + ], + [ + 207, + 84, + 0, + 85, + 0, + "SEGS" + ], + [ + 208, + 77, + 0, + 85, + 1, + "IMAGE" + ], + [ + 209, + 85, + 0, + 86, + 0, + "IMAGE" + ], + [ + 210, + 39, + 0, + 76, + 1, + "DETAILER_PIPE" + ], + [ + 211, + 74, + 0, + 49, + 0, + "*" + ], + [ + 212, + 87, + 1, + 22, + 3, + "SEGM_DETECTOR" + ], + [ + 213, + 87, + 1, + 32, + 3, + "SEGM_DETECTOR" + ], + [ + 214, + 34, + 7, + 74, + 8, + "SEGM_DETECTOR" + ] + ], + "groups": [], + "config": {}, + "extra": {}, + "version": 0.4 +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/test/loop-test.json b/custom_nodes/ComfyUI-Impact-Pack/test/loop-test.json new file mode 100644 index 0000000000000000000000000000000000000000..f1633943e455307b9a043b8f5b91a19c8ec02d3d --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/test/loop-test.json @@ -0,0 +1,1114 @@ +{ + "last_node_id": 43, + "last_link_id": 49, + "nodes": [ + { + "id": 7, + "type": "CLIPTextEncode", + "pos": [ + 413, + 389 + ], + "size": { + "0": 425.27801513671875, + "1": 180.6060791015625 + }, + "flags": {}, + "order": 8, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 5 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 6 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "text, watermark" + ] + }, + { + "id": 6, + "type": "CLIPTextEncode", + "pos": [ + 415, + 186 + ], + "size": { + "0": 422.84503173828125, + "1": 164.31304931640625 + }, + "flags": {}, + "order": 7, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 3 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 4 + ], + "slot_index": 0 + } + ], 
+ "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "beautiful scenery nature glass bottle landscape, , purple galaxy bottle," + ] + }, + { + "id": 9, + "type": "SaveImage", + "pos": [ + 1451, + 189 + ], + "size": { + "0": 210, + "1": 270 + }, + "flags": {}, + "order": 11, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 9 + } + ], + "properties": {}, + "widgets_values": [ + "ComfyUI" + ] + }, + { + "id": 4, + "type": "CheckpointLoaderSimple", + "pos": [ + 26, + 474 + ], + "size": { + "0": 315, + "1": 98 + }, + "flags": {}, + "order": 0, + "mode": 0, + "outputs": [ + { + "name": "MODEL", + "type": "MODEL", + "links": [ + 1 + ], + "slot_index": 0 + }, + { + "name": "CLIP", + "type": "CLIP", + "links": [ + 3, + 5 + ], + "slot_index": 1 + }, + { + "name": "VAE", + "type": "VAE", + "links": [ + 8 + ], + "slot_index": 2 + } + ], + "properties": { + "Node name for S&R": "CheckpointLoaderSimple" + }, + "widgets_values": [ + "V07_v07.safetensors" + ] + }, + { + "id": 8, + "type": "VAEDecode", + "pos": [ + 1209, + 188 + ], + "size": { + "0": 210, + "1": 46 + }, + "flags": {}, + "order": 10, + "mode": 0, + "inputs": [ + { + "name": "samples", + "type": "LATENT", + "link": 7 + }, + { + "name": "vae", + "type": "VAE", + "link": 8 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 9, + 12 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "VAEDecode" + } + }, + { + "id": 19, + "type": "ImpactMinMax", + "pos": [ + 2480, + 1160 + ], + "size": { + "0": 210, + "1": 78 + }, + "flags": {}, + "order": 16, + "mode": 0, + "inputs": [ + { + "name": "a", + "type": "*", + "link": 24 + }, + { + "name": "b", + "type": "*", + "link": 25, + "slot_index": 1 + } + ], + "outputs": [ + { + "name": "INT", + "type": "INT", + "links": [ + 34 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImpactMinMax" + }, + "widgets_values": [ + false + ] + }, + { + "id": 15, + "type": "ImpactValueSender", + "pos": [ + 3520, + 1140 + ], + "size": { + "0": 210, + "1": 58 + }, + "flags": {}, + "order": 20, + "mode": 0, + "inputs": [ + { + "name": "value", + "type": "*", + "link": 39 + } + ], + "properties": { + "Node name for S&R": "ImpactValueSender" + }, + "widgets_values": [ + 0 + ] + }, + { + "id": 11, + "type": "ImageMaskSwitch", + "pos": [ + 1297, + 893 + ], + "size": { + "0": 315, + "1": 198 + }, + "flags": {}, + "order": 12, + "mode": 0, + "inputs": [ + { + "name": "images1", + "type": "IMAGE", + "link": 12 + }, + { + "name": "mask1_opt", + "type": "MASK", + "link": null + }, + { + "name": "images2_opt", + "type": "IMAGE", + "link": 11 + }, + { + "name": "mask2_opt", + "type": "MASK", + "link": null + }, + { + "name": "images3_opt", + "type": "IMAGE", + "link": null + }, + { + "name": "mask3_opt", + "type": "MASK", + "link": null + }, + { + "name": "images4_opt", + "type": "IMAGE", + "link": null + }, + { + "name": "mask4_opt", + "type": "MASK", + "link": null + }, + { + "name": "select", + "type": "INT", + "link": 43, + "widget": { + "name": "select", + "config": [ + "INT", + { + "default": 1, + "min": 1, + "max": 4, + "step": 1 + } + ] + }, + "slot_index": 8 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 13 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "MASK", + "type": "MASK", + "links": null, + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "ImageMaskSwitch" + }, + "widgets_values": [ + 1 + ] + }, + { + "id": 34, + 
"type": "ImpactConditionalBranch", + "pos": [ + 3264, + 1006 + ], + "size": { + "0": 210, + "1": 66 + }, + "flags": {}, + "order": 18, + "mode": 0, + "inputs": [ + { + "name": "cond", + "type": "BOOLEAN", + "link": 36, + "slot_index": 0 + }, + { + "name": "tt_value", + "type": "*", + "link": 37 + }, + { + "name": "ff_value", + "type": "*", + "link": 38 + } + ], + "outputs": [ + { + "name": "*", + "type": "*", + "links": [ + 39 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImpactConditionalBranch" + } + }, + { + "id": 33, + "type": "ImpactInt", + "pos": [ + 3010, + 930 + ], + "size": { + "0": 210, + "1": 58 + }, + "flags": {}, + "order": 1, + "mode": 0, + "outputs": [ + { + "name": "INT", + "type": "INT", + "links": [ + 37 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImpactInt" + }, + "widgets_values": [ + 2 + ] + }, + { + "id": 35, + "type": "ImpactInt", + "pos": [ + 3000, + 1140 + ], + "size": { + "0": 210, + "1": 58 + }, + "flags": {}, + "order": 2, + "mode": 0, + "outputs": [ + { + "name": "INT", + "type": "INT", + "links": [ + 38 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImpactInt" + }, + "widgets_values": [ + 1 + ] + }, + { + "id": 5, + "type": "EmptyLatentImage", + "pos": [ + 473, + 609 + ], + "size": { + "0": 315, + "1": 106 + }, + "flags": {}, + "order": 3, + "mode": 0, + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 2 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "EmptyLatentImage" + }, + "widgets_values": [ + 256, + 256, + 1 + ] + }, + { + "id": 13, + "type": "ImageScaleBy", + "pos": [ + 1730, + 920 + ], + "size": { + "0": 210, + "1": 82 + }, + "flags": {}, + "order": 13, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 13 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 23, + 40 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImageScaleBy" + }, + "widgets_values": [ + "nearest-exact", + 1.2 + ] + }, + { + "id": 41, + "type": "ImpactConditionalStopIteration", + "pos": [ + 3607, + 774 + ], + "size": { + "0": 252, + "1": 26 + }, + "flags": {}, + "order": 21, + "mode": 0, + "inputs": [ + { + "name": "cond", + "type": "BOOLEAN", + "link": 49 + } + ], + "properties": { + "Node name for S&R": "ImpactConditionalStopIteration" + } + }, + { + "id": 32, + "type": "ImpactCompare", + "pos": [ + 2760, + 1040 + ], + "size": { + "0": 210, + "1": 78 + }, + "flags": {}, + "order": 17, + "mode": 0, + "inputs": [ + { + "name": "a", + "type": "*", + "link": 47 + }, + { + "name": "b", + "type": "*", + "link": 34, + "slot_index": 1 + } + ], + "outputs": [ + { + "name": "BOOLEAN", + "type": "BOOLEAN", + "links": [ + 36, + 48 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImpactCompare" + }, + "widgets_values": [ + "a > b" + ] + }, + { + "id": 43, + "type": "ImpactNeg", + "pos": [ + 3210.6906854687495, + 698.6871511123657 + ], + "size": { + "0": 210, + "1": 26 + }, + "flags": {}, + "order": 19, + "mode": 0, + "inputs": [ + { + "name": "value", + "type": "BOOLEAN", + "link": 48 + } + ], + "outputs": [ + { + "name": "BOOLEAN", + "type": "BOOLEAN", + "links": [ + 49 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImpactNeg" + } + }, + { + "id": 10, + "type": "ImageReceiver", + "pos": [ + 641, + 932 + ], + "size": { + "0": 315, + "1": 200 + }, + 
"flags": {}, + "order": 4, + "mode": 0, + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 11 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "MASK", + "type": "MASK", + "links": null, + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "ImageReceiver" + }, + "widgets_values": [ + "ImgSender_temp_vxhgs_00001_.png [temp]", + 0 + ] + }, + { + "id": 24, + "type": "ImpactImageInfo", + "pos": [ + 2077, + 1117 + ], + "size": { + "0": 210, + "1": 86 + }, + "flags": {}, + "order": 14, + "mode": 0, + "inputs": [ + { + "name": "value", + "type": "IMAGE", + "link": 23 + } + ], + "outputs": [ + { + "name": "batch", + "type": "INT", + "links": null, + "shape": 3 + }, + { + "name": "height", + "type": "INT", + "links": [ + 24 + ], + "shape": 3, + "slot_index": 1 + }, + { + "name": "width", + "type": "INT", + "links": [ + 25 + ], + "shape": 3 + }, + { + "name": "channel", + "type": "INT", + "links": null, + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "ImpactImageInfo" + } + }, + { + "id": 42, + "type": "ImpactInt", + "pos": [ + 2483, + 983 + ], + "size": { + "0": 210, + "1": 58 + }, + "flags": {}, + "order": 5, + "mode": 0, + "outputs": [ + { + "name": "INT", + "type": "INT", + "links": [ + 47 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImpactInt" + }, + "widgets_values": [ + 768 + ] + }, + { + "id": 39, + "type": "ImpactValueReceiver", + "pos": [ + 1021, + 1137 + ], + "size": { + "0": 210, + "1": 106 + }, + "flags": {}, + "order": 6, + "mode": 0, + "outputs": [ + { + "name": "*", + "type": "*", + "links": [ + 43 + ], + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "ImpactValueReceiver" + }, + "widgets_values": [ + "INT", + 1, + 0 + ] + }, + { + "id": 3, + "type": "KSampler", + "pos": [ + 872, + 217 + ], + "size": { + "0": 315, + "1": 474 + }, + "flags": {}, + "order": 9, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 1 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 4 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 6 + }, + { + "name": "latent_image", + "type": "LATENT", + "link": 2 + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 7 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSampler" + }, + "widgets_values": [ + 901257808527154, + "fixed", + 5, + 8, + "euler", + "normal", + 1 + ] + }, + { + "id": 36, + "type": "ImageSender", + "pos": [ + 2046, + -116 + ], + "size": [ + 914.2697004627885, + 989.0802794506753 + ], + "flags": {}, + "order": 15, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 40 + } + ], + "properties": { + "Node name for S&R": "ImageSender" + }, + "widgets_values": [ + "ImgSender", + 0 + ] + } + ], + "links": [ + [ + 1, + 4, + 0, + 3, + 0, + "MODEL" + ], + [ + 2, + 5, + 0, + 3, + 3, + "LATENT" + ], + [ + 3, + 4, + 1, + 6, + 0, + "CLIP" + ], + [ + 4, + 6, + 0, + 3, + 1, + "CONDITIONING" + ], + [ + 5, + 4, + 1, + 7, + 0, + "CLIP" + ], + [ + 6, + 7, + 0, + 3, + 2, + "CONDITIONING" + ], + [ + 7, + 3, + 0, + 8, + 0, + "LATENT" + ], + [ + 8, + 4, + 2, + 8, + 1, + "VAE" + ], + [ + 9, + 8, + 0, + 9, + 0, + "IMAGE" + ], + [ + 11, + 10, + 0, + 11, + 2, + "IMAGE" + ], + [ + 12, + 8, + 0, + 11, + 0, + "IMAGE" + ], + [ + 13, + 11, + 0, + 13, + 0, + "IMAGE" + ], + [ + 23, + 13, + 0, + 24, + 0, + "IMAGE" + ], + [ + 24, + 24, + 1, + 19, + 0, + "*" + ], + [ + 25, + 24, + 2, + 19, + 1, + "*" + ], + [ + 34, + 19, + 0, + 32, + 1, 
+ "*" + ], + [ + 36, + 32, + 0, + 34, + 0, + "BOOLEAN" + ], + [ + 37, + 33, + 0, + 34, + 1, + "*" + ], + [ + 38, + 35, + 0, + 34, + 2, + "*" + ], + [ + 39, + 34, + 0, + 15, + 0, + "*" + ], + [ + 40, + 13, + 0, + 36, + 0, + "IMAGE" + ], + [ + 43, + 39, + 0, + 11, + 8, + "INT" + ], + [ + 47, + 42, + 0, + 32, + 0, + "*" + ], + [ + 48, + 32, + 0, + 43, + 0, + "BOOLEAN" + ], + [ + 49, + 43, + 0, + 41, + 0, + "BOOLEAN" + ] + ], + "groups": [], + "config": {}, + "extra": {}, + "version": 0.4 +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/test/masks.json b/custom_nodes/ComfyUI-Impact-Pack/test/masks.json new file mode 100644 index 0000000000000000000000000000000000000000..da9c261f469a75c1513a8dfe2df9ce471dc01ebb --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/test/masks.json @@ -0,0 +1,622 @@ +{ + "last_node_id": 38, + "last_link_id": 52, + "nodes": [ + { + "id": 21, + "type": "SEGSToImageList", + "pos": [ + 2160, + 970 + ], + "size": { + "0": 304.79998779296875, + "1": 46 + }, + "flags": {}, + "order": 10, + "mode": 0, + "inputs": [ + { + "name": "segs", + "type": "SEGS", + "link": 41 + }, + { + "name": "fallback_image_opt", + "type": "IMAGE", + "link": 26, + "slot_index": 1 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 27 + ], + "shape": 6, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "SEGSToImageList" + } + }, + { + "id": 5, + "type": "MaskToSEGS", + "pos": [ + 1520, + 980 + ], + "size": { + "0": 210, + "1": 130 + }, + "flags": {}, + "order": 4, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 5 + } + ], + "outputs": [ + { + "name": "SEGS", + "type": "SEGS", + "links": [ + 35, + 46 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "MaskToSEGS" + }, + "widgets_values": [ + "False", + 3, + "disabled", + 10 + ] + }, + { + "id": 36, + "type": "MasksToMaskList", + "pos": [ + 2270, + 680 + ], + "size": { + "0": 158.000244140625, + "1": 26 + }, + "flags": {}, + "order": 8, + "mode": 0, + "inputs": [ + { + "name": "masks", + "type": "MASKS", + "link": 51 + } + ], + "outputs": [ + { + "name": "MASK", + "type": "MASK", + "links": [ + 52 + ], + "shape": 6, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "MasksToMaskList" + }, + "color": "#223", + "bgcolor": "#335" + }, + { + "id": 35, + "type": "MaskToImage", + "pos": [ + 2480, + 680 + ], + "size": { + "0": 176.39999389648438, + "1": 38.59991455078125 + }, + "flags": {}, + "order": 11, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 52 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 50 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "MaskToImage" + } + }, + { + "id": 28, + "type": "Segs & Mask ForEach", + "pos": [ + 1800, + 980 + ], + "size": { + "0": 243.60000610351562, + "1": 46 + }, + "flags": {}, + "order": 7, + "mode": 0, + "inputs": [ + { + "name": "segs", + "type": "SEGS", + "link": 35, + "slot_index": 0 + }, + { + "name": "masks", + "type": "MASKS", + "link": 43 + } + ], + "outputs": [ + { + "name": "SEGS", + "type": "SEGS", + "links": [ + 41 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "Segs & Mask ForEach" + } + }, + { + "id": 22, + "type": "PreviewImage", + "pos": [ + 2510, + 970 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 12, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + 
"link": 27 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 4, + "type": "LoadImage", + "pos": [ + 1150, + 460 + ], + "size": { + "0": 315, + "1": 314 + }, + "flags": {}, + "order": 0, + "mode": 0, + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 26, + 47 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "MASK", + "type": "MASK", + "links": [ + 5 + ], + "shape": 3, + "slot_index": 1 + } + ], + "properties": { + "Node name for S&R": "LoadImage" + }, + "widgets_values": [ + "clipspace/clipspace-mask-416378.30000000075.png [input]", + "image" + ] + }, + { + "id": 33, + "type": "SAMDetectorSegmented", + "pos": [ + 1740, + 310 + ], + "size": { + "0": 315, + "1": 218 + }, + "flags": {}, + "order": 5, + "mode": 0, + "inputs": [ + { + "name": "sam_model", + "type": "SAM_MODEL", + "link": 45 + }, + { + "name": "segs", + "type": "SEGS", + "link": 46 + }, + { + "name": "image", + "type": "IMAGE", + "link": 47 + } + ], + "outputs": [ + { + "name": "combined_mask", + "type": "MASK", + "links": [ + 44 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "batch_masks", + "type": "MASKS", + "links": [ + 43, + 51 + ], + "shape": 3, + "slot_index": 1 + } + ], + "properties": { + "Node name for S&R": "SAMDetectorSegmented" + }, + "widgets_values": [ + "center-1", + 0, + 0.7, + 0, + 0.7, + "False" + ] + }, + { + "id": 2, + "type": "SAMLoader", + "pos": [ + 1160, + 310 + ], + "size": { + "0": 315, + "1": 82 + }, + "flags": {}, + "order": 1, + "mode": 0, + "outputs": [ + { + "name": "SAM_MODEL", + "type": "SAM_MODEL", + "links": [ + 45 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "SAMLoader" + }, + "widgets_values": [ + "sam_vit_b_01ec64.pth", + "AUTO" + ] + }, + { + "id": 6, + "type": "MaskToImage", + "pos": [ + 2300, + 310 + ], + "size": { + "0": 176.39999389648438, + "1": 26 + }, + "flags": {}, + "order": 6, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 44 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 8 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "MaskToImage" + } + }, + { + "id": 7, + "type": "PreviewImage", + "pos": [ + 2720, + 310 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 9, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 8 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 9, + "type": "PreviewImage", + "pos": [ + 2720, + 680 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 13, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 50 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 38, + "type": "Note", + "pos": [ + 2032, + 698 + ], + "size": [ + 210, + 81.49969482421875 + ], + "flags": {}, + "order": 2, + "mode": 0, + "properties": { + "text": "" + }, + "widgets_values": [ + "MasksToMaskList node introduced\n" + ], + "color": "#432", + "bgcolor": "#653" + }, + { + "id": 37, + "type": "Note", + "pos": [ + 2071, + 384 + ], + "size": [ + 281.500244140625, + 65.09967041015625 + ], + "flags": {}, + "order": 3, + "mode": 0, + "properties": { + "text": "" + }, + "widgets_values": [ + "type of batch_masks => MASKS instead of MASK\n" + ], + "color": "#432", + "bgcolor": "#653" + } + ], + "links": [ + [ + 5, + 4, + 1, + 5, + 0, + "MASK" + ], + [ + 8, + 6, + 0, + 7, + 0, + "IMAGE" + ], + [ + 26, + 4, + 0, + 21, + 1, + 
"IMAGE" + ], + [ + 27, + 21, + 0, + 22, + 0, + "IMAGE" + ], + [ + 35, + 5, + 0, + 28, + 0, + "SEGS" + ], + [ + 41, + 28, + 0, + 21, + 0, + "SEGS" + ], + [ + 43, + 33, + 1, + 28, + 1, + "MASKS" + ], + [ + 44, + 33, + 0, + 6, + 0, + "MASK" + ], + [ + 45, + 2, + 0, + 33, + 0, + "SAM_MODEL" + ], + [ + 46, + 5, + 0, + 33, + 1, + "SEGS" + ], + [ + 47, + 4, + 0, + 33, + 2, + "IMAGE" + ], + [ + 50, + 35, + 0, + 9, + 0, + "IMAGE" + ], + [ + 51, + 33, + 1, + 36, + 0, + "MASKS" + ], + [ + 52, + 36, + 0, + 35, + 0, + "MASK" + ] + ], + "groups": [], + "config": {}, + "extra": {}, + "version": 0.4 +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/test/regional_prompt.json b/custom_nodes/ComfyUI-Impact-Pack/test/regional_prompt.json new file mode 100644 index 0000000000000000000000000000000000000000..3864d5221c7a6b399f2e3278f2535b109eb1838c --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/test/regional_prompt.json @@ -0,0 +1,1625 @@ +{ + "last_node_id": 35, + "last_link_id": 65, + "nodes": [ + { + "id": 9, + "type": "EditBasicPipe", + "pos": [ + 1210, + 1030 + ], + "size": { + "0": 267, + "1": 126 + }, + "flags": {}, + "order": 16, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 60 + }, + { + "name": "model", + "type": "MODEL", + "link": null + }, + { + "name": "clip", + "type": "CLIP", + "link": null + }, + { + "name": "vae", + "type": "VAE", + "link": null + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 13 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": null + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 16 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "EditBasicPipe" + } + }, + { + "id": 15, + "type": "LoadImage", + "pos": [ + -240, + 1710 + ], + "size": { + "0": 900, + "1": 900 + }, + "flags": {}, + "order": 0, + "mode": 0, + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": null, + "shape": 3 + }, + { + "name": "MASK", + "type": "MASK", + "links": [ + 28 + ], + "shape": 3, + "slot_index": 1 + } + ], + "properties": { + "Node name for S&R": "LoadImage" + }, + "widgets_values": [ + "clipspace/clipspace-mask-1572044.0999999996.png [input]", + "image" + ] + }, + { + "id": 23, + "type": "LoadImage", + "pos": [ + -240, + 3790 + ], + "size": { + "0": 920, + "1": 910 + }, + "flags": {}, + "order": 1, + "mode": 0, + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": null, + "shape": 3 + }, + { + "name": "MASK", + "type": "MASK", + "links": [ + 31 + ], + "shape": 3, + "slot_index": 1 + } + ], + "properties": { + "Node name for S&R": "LoadImage" + }, + "widgets_values": [ + "clipspace/clipspace-mask-1351518.png [input]", + "image" + ] + }, + { + "id": 26, + "type": "EditBasicPipe", + "pos": [ + 1240, + 4180 + ], + "size": { + "0": 178, + "1": 126 + }, + "flags": {}, + "order": 13, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 59 + }, + { + "name": "model", + "type": "MODEL", + "link": null + }, + { + "name": "clip", + "type": "CLIP", + "link": null + }, + { + "name": "vae", + "type": "VAE", + "link": null + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 34 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": null + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 33 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "EditBasicPipe" + } + }, 
+ { + "id": 17, + "type": "EditBasicPipe", + "pos": [ + 1550, + 1740 + ], + "size": { + "0": 178, + "1": 126 + }, + "flags": {}, + "order": 14, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 57 + }, + { + "name": "model", + "type": "MODEL", + "link": null + }, + { + "name": "clip", + "type": "CLIP", + "link": null + }, + { + "name": "vae", + "type": "VAE", + "link": null + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 21 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": null + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 24 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "EditBasicPipe" + } + }, + { + "id": 7, + "type": "VAEDecode", + "pos": [ + 3660, + 1820 + ], + "size": { + "0": 210, + "1": 46 + }, + "flags": {}, + "order": 27, + "mode": 0, + "inputs": [ + { + "name": "samples", + "type": "LATENT", + "link": 7 + }, + { + "name": "vae", + "type": "VAE", + "link": 63 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 9 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "VAEDecode" + } + }, + { + "id": 8, + "type": "PreviewImage", + "pos": [ + 4020, + 1450 + ], + "size": { + "0": 1069.308349609375, + "1": 1128.923828125 + }, + "flags": {}, + "order": 28, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 9 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 10, + "type": "CLIPTextEncode", + "pos": [ + 860, + 1110 + ], + "size": { + "0": 292.0009765625, + "1": 115.41679382324219 + }, + "flags": {}, + "order": 12, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 61 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 13 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "photorealistic:1.4, 1girl black hair, upper knee, (cafe:1.1)" + ] + }, + { + "id": 22, + "type": "CombineRegionalPrompts", + "pos": [ + 2810, + 1860 + ], + "size": { + "0": 287.20001220703125, + "1": 106 + }, + "flags": {}, + "order": 25, + "mode": 0, + "inputs": [ + { + "name": "regional_prompts1", + "type": "REGIONAL_PROMPTS", + "link": 48 + }, + { + "name": "regional_prompts2", + "type": "REGIONAL_PROMPTS", + "link": 49 + }, + { + "name": "regional_prompts3", + "type": "REGIONAL_PROMPTS", + "link": 50 + }, + { + "name": "regional_prompts4", + "type": "REGIONAL_PROMPTS", + "link": 64 + }, + { + "name": "regional_prompts5", + "type": "REGIONAL_PROMPTS", + "link": null + } + ], + "outputs": [ + { + "name": "REGIONAL_PROMPTS", + "type": "REGIONAL_PROMPTS", + "links": [ + 27 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CombineRegionalPrompts" + } + }, + { + "id": 12, + "type": "RegionalPrompt", + "pos": [ + 2030, + 1010 + ], + "size": { + "0": 418.1999816894531, + "1": 46 + }, + "flags": {}, + "order": 24, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 15 + }, + { + "name": "advanced_sampler", + "type": "KSAMPLER_ADVANCED", + "link": 17 + } + ], + "outputs": [ + { + "name": "REGIONAL_PROMPTS", + "type": "REGIONAL_PROMPTS", + "links": [ + 48 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "RegionalPrompt" + } + }, + { + "id": 14, + "type": "EmptyLatentImage", + "pos": [ + 2740, + 1500 
+ ], + "size": { + "0": 350, + "1": 110 + }, + "flags": {}, + "order": 2, + "mode": 0, + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 19 + ], + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "EmptyLatentImage" + }, + "widgets_values": [ + 768, + 1104, + 1 + ] + }, + { + "id": 27, + "type": "CLIPTextEncode", + "pos": [ + 830, + 4260 + ], + "size": [ + 338.8743232727047, + 117.87075195312445 + ], + "flags": {}, + "order": 9, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 37 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 34 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "photorealistic:1.4, 1girl yellow pencil skirt, upper knee, (cafe:1.1)" + ] + }, + { + "id": 25, + "type": "KSamplerAdvancedProvider", + "pos": [ + 1600, + 4180 + ], + "size": { + "0": 287.9136962890625, + "1": 106.45689392089844 + }, + "flags": {}, + "order": 17, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 33 + } + ], + "outputs": [ + { + "name": "KSAMPLER_ADVANCED", + "type": "KSAMPLER_ADVANCED", + "links": [ + 32 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSamplerAdvancedProvider" + }, + "widgets_values": [ + 8, + "dpm_fast", + "sgm_uniform" + ] + }, + { + "id": 13, + "type": "KSamplerAdvancedProvider", + "pos": [ + 1563, + 1030 + ], + "size": { + "0": 355.20001220703125, + "1": 106 + }, + "flags": {}, + "order": 20, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 16 + } + ], + "outputs": [ + { + "name": "KSAMPLER_ADVANCED", + "type": "KSAMPLER_ADVANCED", + "links": [ + 17 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSamplerAdvancedProvider" + }, + "widgets_values": [ + 8, + "dpm_fast", + "sgm_uniform" + ] + }, + { + "id": 2, + "type": "RegionalSampler", + "pos": [ + 3260, + 1820 + ], + "size": { + "0": 323.1692810058594, + "1": 597.25439453125 + }, + "flags": {}, + "order": 26, + "mode": 0, + "inputs": [ + { + "name": "samples", + "type": "LATENT", + "link": 19, + "slot_index": 0 + }, + { + "name": "base_sampler", + "type": "KSAMPLER_ADVANCED", + "link": 10 + }, + { + "name": "regional_prompts", + "type": "REGIONAL_PROMPTS", + "link": 27 + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 7 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "RegionalSampler" + }, + "widgets_values": [ + 1019854126263754, + "randomize", + 30, + 1, + 5 + ] + }, + { + "id": 5, + "type": "## make-basic_pipe [2c8c61]", + "pos": [ + -2547, + 2236 + ], + "size": { + "0": 400, + "1": 200 + }, + "flags": {}, + "order": 3, + "mode": 0, + "inputs": [ + { + "name": "vae_opt", + "type": "VAE", + "link": null + } + ], + "outputs": [ + { + "name": "BASIC_PIPE", + "type": "BASIC_PIPE", + "links": [ + 1, + 3, + 62 + ], + "shape": 3, + "slot_index": 0 + } + ], + "title": "## make-basic_pipe", + "properties": { + "Node name for S&R": "## make-basic_pipe [2c8c61]" + }, + "widgets_values": [ + "SD1.5/epicrealism_naturalSinRC1VAE.safetensors", + "a photograph of a girl is standing in the cafe terrace, looking viewer, upper knee", + "big head, closeup" + ] + }, + { + "id": 1, + "type": "LoadImage", + "pos": [ + -260, + 778 + ], + "size": { + "0": 915.1032104492188, + "1": 860.6505126953125 + }, + "flags": {}, + "order": 4, 
+ "mode": 0, + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": null, + "shape": 3 + }, + { + "name": "MASK", + "type": "MASK", + "links": [ + 15 + ], + "shape": 3, + "slot_index": 1 + } + ], + "properties": { + "Node name for S&R": "LoadImage" + }, + "widgets_values": [ + "clipspace/clipspace-mask-1641138.7000000002.png [input]", + "image" + ] + }, + { + "id": 31, + "type": "CLIPTextEncode", + "pos": [ + 1230, + 2550 + ], + "size": { + "0": 292.0009765625, + "1": 115.41679382324219 + }, + "flags": {}, + "order": 11, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 56 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 51 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "photorealistic:1.4, 1girl, green tie, upper knee, (cafe:1.1)" + ] + }, + { + "id": 33, + "type": "KSamplerAdvancedProvider", + "pos": [ + 1890, + 2470 + ], + "size": { + "0": 305.4067687988281, + "1": 106 + }, + "flags": {}, + "order": 19, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 53 + } + ], + "outputs": [ + { + "name": "KSAMPLER_ADVANCED", + "type": "KSAMPLER_ADVANCED", + "links": [ + 52 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSamplerAdvancedProvider" + }, + "widgets_values": [ + 8, + "dpm_fast", + "sgm_uniform" + ] + }, + { + "id": 30, + "type": "EditBasicPipe", + "pos": [ + 1610, + 2480 + ], + "size": { + "0": 178, + "1": 126 + }, + "flags": {}, + "order": 15, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 58 + }, + { + "name": "model", + "type": "MODEL", + "link": null + }, + { + "name": "clip", + "type": "CLIP", + "link": null + }, + { + "name": "vae", + "type": "VAE", + "link": null + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 51 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": null + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 53 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "EditBasicPipe" + } + }, + { + "id": 6, + "type": "FromBasicPipe", + "pos": [ + -1813, + 2226 + ], + "size": { + "0": 241.79998779296875, + "1": 106 + }, + "flags": {}, + "order": 7, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 3 + } + ], + "outputs": [ + { + "name": "model", + "type": "MODEL", + "links": null, + "shape": 3 + }, + { + "name": "clip", + "type": "CLIP", + "links": [ + 37 + ], + "shape": 3, + "slot_index": 1 + }, + { + "name": "vae", + "type": "VAE", + "links": [], + "shape": 3, + "slot_index": 2 + }, + { + "name": "positive", + "type": "CONDITIONING", + "links": [], + "shape": 3, + "slot_index": 3 + }, + { + "name": "negative", + "type": "CONDITIONING", + "links": null, + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "FromBasicPipe" + } + }, + { + "id": 34, + "type": "FromBasicPipe_v2", + "pos": [ + 699, + 2163 + ], + "size": { + "0": 267, + "1": 126 + }, + "flags": {}, + "order": 8, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 62 + } + ], + "outputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "links": [ + 57, + 58, + 59, + 60 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "model", + "type": "MODEL", + "links": null, + "shape": 3 + }, + { + "name": "clip", + "type": "CLIP", + 
"links": [ + 55, + 56, + 61 + ], + "shape": 3, + "slot_index": 2 + }, + { + "name": "vae", + "type": "VAE", + "links": [ + 63 + ], + "shape": 3, + "slot_index": 3 + }, + { + "name": "positive", + "type": "CONDITIONING", + "links": null, + "shape": 3 + }, + { + "name": "negative", + "type": "CONDITIONING", + "links": null, + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "FromBasicPipe_v2" + } + }, + { + "id": 20, + "type": "RegionalPrompt", + "pos": [ + 2230, + 1720 + ], + "size": { + "0": 278.79998779296875, + "1": 57.09715270996094 + }, + "flags": {}, + "order": 22, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 28 + }, + { + "name": "advanced_sampler", + "type": "KSAMPLER_ADVANCED", + "link": 23 + } + ], + "outputs": [ + { + "name": "REGIONAL_PROMPTS", + "type": "REGIONAL_PROMPTS", + "links": [ + 49 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "RegionalPrompt" + } + }, + { + "id": 18, + "type": "CLIPTextEncode", + "pos": [ + 1180, + 1820 + ], + "size": { + "0": 292.0009765625, + "1": 115.41679382324219 + }, + "flags": {}, + "order": 10, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 55 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 21 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "photorealistic:1.4, 1girl pink jacket, upper knee, (cafe:1.1)" + ] + }, + { + "id": 32, + "type": "RegionalPrompt", + "pos": [ + 2280, + 2450 + ], + "size": { + "0": 278.79998779296875, + "1": 57.09715270996094 + }, + "flags": {}, + "order": 23, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 65 + }, + { + "name": "advanced_sampler", + "type": "KSAMPLER_ADVANCED", + "link": 52 + } + ], + "outputs": [ + { + "name": "REGIONAL_PROMPTS", + "type": "REGIONAL_PROMPTS", + "links": [ + 64 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "RegionalPrompt" + } + }, + { + "id": 24, + "type": "RegionalPrompt", + "pos": [ + 2040, + 4160 + ], + "size": { + "0": 278.79998779296875, + "1": 47.54190444946289 + }, + "flags": {}, + "order": 21, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 31 + }, + { + "name": "advanced_sampler", + "type": "KSAMPLER_ADVANCED", + "link": 32 + } + ], + "outputs": [ + { + "name": "REGIONAL_PROMPTS", + "type": "REGIONAL_PROMPTS", + "links": [ + 50 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "RegionalPrompt" + } + }, + { + "id": 35, + "type": "LoadImage", + "pos": [ + -274, + 2727 + ], + "size": { + "0": 900, + "1": 900 + }, + "flags": {}, + "order": 5, + "mode": 0, + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": null, + "shape": 3 + }, + { + "name": "MASK", + "type": "MASK", + "links": [ + 65 + ], + "shape": 3, + "slot_index": 1 + } + ], + "properties": { + "Node name for S&R": "LoadImage" + }, + "widgets_values": [ + "clipspace/clipspace-mask-1594007.5999999996.png [input]", + "image" + ] + }, + { + "id": 21, + "type": "KSamplerAdvancedProvider", + "pos": [ + 1840, + 1740 + ], + "size": { + "0": 305.4067687988281, + "1": 106 + }, + "flags": {}, + "order": 18, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 24 + } + ], + "outputs": [ + { + "name": "KSAMPLER_ADVANCED", + "type": "KSAMPLER_ADVANCED", + "links": [ + 23 + ], + "shape": 3, + "slot_index": 0 + 
} + ], + "properties": { + "Node name for S&R": "KSamplerAdvancedProvider" + }, + "widgets_values": [ + 8, + "dpm_fast", + "sgm_uniform" + ] + }, + { + "id": 4, + "type": "KSamplerAdvancedProvider", + "pos": [ + 2742, + 1681 + ], + "size": { + "0": 355.20001220703125, + "1": 106 + }, + "flags": {}, + "order": 6, + "mode": 0, + "inputs": [ + { + "name": "basic_pipe", + "type": "BASIC_PIPE", + "link": 1 + } + ], + "outputs": [ + { + "name": "KSAMPLER_ADVANCED", + "type": "KSAMPLER_ADVANCED", + "links": [ + 10 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSamplerAdvancedProvider" + }, + "widgets_values": [ + 5, + "dpm_fast", + "simple" + ] + } + ], + "links": [ + [ + 1, + 5, + 0, + 4, + 0, + "BASIC_PIPE" + ], + [ + 3, + 5, + 0, + 6, + 0, + "BASIC_PIPE" + ], + [ + 7, + 2, + 0, + 7, + 0, + "LATENT" + ], + [ + 9, + 7, + 0, + 8, + 0, + "IMAGE" + ], + [ + 10, + 4, + 0, + 2, + 1, + "KSAMPLER_ADVANCED" + ], + [ + 13, + 10, + 0, + 9, + 4, + "CONDITIONING" + ], + [ + 15, + 1, + 1, + 12, + 0, + "MASK" + ], + [ + 16, + 9, + 0, + 13, + 0, + "BASIC_PIPE" + ], + [ + 17, + 13, + 0, + 12, + 1, + "KSAMPLER_ADVANCED" + ], + [ + 19, + 14, + 0, + 2, + 0, + "LATENT" + ], + [ + 21, + 18, + 0, + 17, + 4, + "CONDITIONING" + ], + [ + 23, + 21, + 0, + 20, + 1, + "KSAMPLER_ADVANCED" + ], + [ + 24, + 17, + 0, + 21, + 0, + "BASIC_PIPE" + ], + [ + 27, + 22, + 0, + 2, + 2, + "REGIONAL_PROMPTS" + ], + [ + 28, + 15, + 1, + 20, + 0, + "MASK" + ], + [ + 31, + 23, + 1, + 24, + 0, + "MASK" + ], + [ + 32, + 25, + 0, + 24, + 1, + "KSAMPLER_ADVANCED" + ], + [ + 33, + 26, + 0, + 25, + 0, + "BASIC_PIPE" + ], + [ + 34, + 27, + 0, + 26, + 4, + "CONDITIONING" + ], + [ + 37, + 6, + 1, + 27, + 0, + "CLIP" + ], + [ + 48, + 12, + 0, + 22, + 0, + "REGIONAL_PROMPTS" + ], + [ + 49, + 20, + 0, + 22, + 1, + "REGIONAL_PROMPTS" + ], + [ + 50, + 24, + 0, + 22, + 2, + "REGIONAL_PROMPTS" + ], + [ + 51, + 31, + 0, + 30, + 4, + "CONDITIONING" + ], + [ + 52, + 33, + 0, + 32, + 1, + "KSAMPLER_ADVANCED" + ], + [ + 53, + 30, + 0, + 33, + 0, + "BASIC_PIPE" + ], + [ + 55, + 34, + 2, + 18, + 0, + "CLIP" + ], + [ + 56, + 34, + 2, + 31, + 0, + "CLIP" + ], + [ + 57, + 34, + 0, + 17, + 0, + "BASIC_PIPE" + ], + [ + 58, + 34, + 0, + 30, + 0, + "BASIC_PIPE" + ], + [ + 59, + 34, + 0, + 26, + 0, + "BASIC_PIPE" + ], + [ + 60, + 34, + 0, + 9, + 0, + "BASIC_PIPE" + ], + [ + 61, + 34, + 2, + 10, + 0, + "CLIP" + ], + [ + 62, + 5, + 0, + 34, + 0, + "BASIC_PIPE" + ], + [ + 63, + 34, + 3, + 7, + 1, + "VAE" + ], + [ + 64, + 32, + 0, + 22, + 3, + "REGIONAL_PROMPTS" + ], + [ + 65, + 35, + 1, + 32, + 0, + "MASK" + ] + ], + "groups": [], + "config": {}, + "extra": {}, + "version": 0.4 +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/troubleshooting/TROUBLESHOOTING.md b/custom_nodes/ComfyUI-Impact-Pack/troubleshooting/TROUBLESHOOTING.md new file mode 100644 index 0000000000000000000000000000000000000000..1284b1f29eaf2e1eb0a7309ea3816d5d2259a4c5 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/troubleshooting/TROUBLESHOOTING.md @@ -0,0 +1,72 @@ +## When a permission error occurs during the installation process (on Windows) + +* There are cases where the package you are trying to install is already being used by another custom node that has been loaded. + * This issue occurs only on Windows. +* Please close ComfyUI and execute install.py directly using Python in the custom_nodes/ComfyUI-Impact-Pack directory. + * In case **portable** version: + 1. goto **ComfyUI_windows_portable** directory in **cmd** + 2. 
execute ```.\python_embeded\python.exe -s custom_nodes\ComfyUI-Impact-Pack\install.py``` + * In case **venv**: + 1. activate venv + 2. execute ```python -s custom_nodes\ComfyUI-Impact-Pack\install.py``` + * Others: + 1. Please modify the path of 'python' according to your Python environment. + 2. execute ```(YOUR PYTHON) -s custom_nodes\ComfyUI-Impact-Pack\install.py``` + + +## If the nodes of the Impact Pack hang during execution + +* Hangs may occur during dilation-related processing, depending on the compatibility of your environment. +* Please set `disable_gpu_opencv = True` in the `ComfyUI-Impact-Pack/impact-pack.ini` file. Depending on the environment, issues occasionally arise when the OpenCV GPU mode is enabled. + + e.g. +``` +[default] +dependency_version = 17 +mmdet_skip = True +sam_editor_cpu = False +sam_editor_model = sam_vit_b_01ec64.pth +custom_wildcards = /home/me/github/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/custom_wildcards +disable_gpu_opencv = True +``` + +## An issue occurs when importing Ultralytics +``` + AttributeError: 'Logger' object has no attribute 'reconfigure' + + or + + AttributeError: 'Logger' object has no attribute 'encoding' +``` +* Update `ComfyUI-Manager` to V1.1.2 or above + + +## An issue occurs with 'cv2' + +``` + AttributeError: module 'cv2' has no attribute 'setNumThreads' +``` + +* Update 'opencv-python' and 'opencv-python-headless' to the latest version + * Once you update to the latest version, you can also downgrade back to 4.6.0.66 if needed. + * For the portable version, navigate to the portable installation directory in the command prompt, and enter the following command: + + ``` + .\python_embeded\python.exe -m pip install -U opencv-python opencv-python-headless + ``` + + * When using the WAS Node Suite or ReActor nodes, the latest version may not work as expected. You can downgrade using the following command: + + ``` + .\python_embeded\python.exe -m pip install -U opencv-python==4.6.0.66 opencv-python-headless==4.6.0.66 + ``` + + +## Distortion on Detailer + +* Note that this issue may be caused by a bug in xformers 0.0.18. If you encounter this problem, try adjusting the `guide_size` parameter. + +![example](black1.png) + +![example](black2.png) +* guide_size changed from 256 -> 192 diff --git a/custom_nodes/ComfyUI-Impact-Pack/troubleshooting/black1.png b/custom_nodes/ComfyUI-Impact-Pack/troubleshooting/black1.png new file mode 100644 index 0000000000000000000000000000000000000000..aa9cd8c8abbffe8ae2e50618898dbfa169fe461b Binary files /dev/null and b/custom_nodes/ComfyUI-Impact-Pack/troubleshooting/black1.png differ diff --git a/custom_nodes/ComfyUI-Impact-Pack/troubleshooting/black2.png b/custom_nodes/ComfyUI-Impact-Pack/troubleshooting/black2.png new file mode 100644 index 0000000000000000000000000000000000000000..b14f2c10151741deeb9bd84dd5f77e9613c5cfc0 Binary files /dev/null and b/custom_nodes/ComfyUI-Impact-Pack/troubleshooting/black2.png differ diff --git a/custom_nodes/ComfyUI-Impact-Pack/uninstall.py b/custom_nodes/ComfyUI-Impact-Pack/uninstall.py new file mode 100644 index 0000000000000000000000000000000000000000..2d62417c14128faca59ced13bbd83d5cd8708da3 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/uninstall.py @@ -0,0 +1,38 @@ +import os +import sys +import time +import platform +import shutil +import subprocess + +comfy_path = '../..'
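+ +# comfy_path assumes this script is executed from inside custom_nodes/ComfyUI-Impact-Pack, +# so '../..' resolves to the ComfyUI installation root. +# rmtree() below retries the removal a few times, since files may still be locked by a +# running ComfyUI instance; on Windows it also clears read-only attributes via 'attrib -R' +# first so that shutil.rmtree does not fail on read-only files.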
+ +def rmtree(path): + retry_count = 3 + + while True: + try: + retry_count -= 1 + + if platform.system() == "Windows": + subprocess.check_call(['attrib', '-R', path + '\\*', '/S']) + + shutil.rmtree(path) + + return True + + except Exception as ex: + print(f"ex: {ex}") + time.sleep(3) + + if retry_count < 0: + raise ex + + print(f"Uninstall retry({retry_count})") + +js_dest_path = os.path.join(comfy_path, "web", "extensions", "impact-pack") + +if os.path.exists(js_dest_path): + rmtree(js_dest_path) + + diff --git a/custom_nodes/ComfyUI-Impact-Pack/wildcards/put_wildcards_here b/custom_nodes/ComfyUI-Impact-Pack/wildcards/put_wildcards_here new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/custom_nodes/ComfyUI-Impact-Pack/wildcards/samples/flower.txt b/custom_nodes/ComfyUI-Impact-Pack/wildcards/samples/flower.txt new file mode 100644 index 0000000000000000000000000000000000000000..f8d0606f8de5a728a864d224a9ae0af4f77a9a7f --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/wildcards/samples/flower.txt @@ -0,0 +1,9 @@ +rose +orchid +iris +carnation +lily +daisy +chrysanthemum +daffodil +dahlia \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Impact-Pack/wildcards/samples/jewel.txt b/custom_nodes/ComfyUI-Impact-Pack/wildcards/samples/jewel.txt new file mode 100644 index 0000000000000000000000000000000000000000..2a58330357dbf4fc6d94879cb449b87d04a88d51 --- /dev/null +++ b/custom_nodes/ComfyUI-Impact-Pack/wildcards/samples/jewel.txt @@ -0,0 +1,9 @@ +diamond +emerald +sapphire +opal +ruby +topaz +pearl +amethyst +aquamarine \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/.cache/.cache_directory b/custom_nodes/ComfyUI-Manager/.cache/.cache_directory new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/custom_nodes/ComfyUI-Manager/.cache/1514988643_custom-node-list.json b/custom_nodes/ComfyUI-Manager/.cache/1514988643_custom-node-list.json new file mode 100644 index 0000000000000000000000000000000000000000..7df26287a61b98f4340e73b5a7d72a5998f55632 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/.cache/1514988643_custom-node-list.json @@ -0,0 +1,5643 @@ +{ + "custom_nodes": [ + { + "author": "Dr.Lt.Data", + "description": "ComfyUI-Manager itself is also a custom node.", + "files": [ + "https://github.com/ltdrdata/ComfyUI-Manager" + ], + "install_type": "git-clone", + "reference": "https://github.com/ltdrdata/ComfyUI-Manager", + "title": "ComfyUI-Manager" + }, + { + "author": "Dr.Lt.Data", + "description": "This extension offers various detector nodes and detailer nodes that allow you to configure a workflow that automatically enhances facial details. And provide iterative upscaler.\n[w/NOTE:'Segs & Mask' has been renamed to 'ImpactSegsAndMask.' Please replace the node with the new name.]", + "files": [ + "https://github.com/ltdrdata/ComfyUI-Impact-Pack" + ], + "install_type": "git-clone", + "pip": [ + "ultralytics" + ], + "reference": "https://github.com/ltdrdata/ComfyUI-Impact-Pack", + "title": "ComfyUI Impact Pack" + }, + { + "author": "Dr.Lt.Data", + "description": "This extension provides various nodes to support Lora Block Weight and the Impact Pack. 
Provides many easily applicable regional features and applications for Variation Seed.", + "files": [ + "https://github.com/ltdrdata/ComfyUI-Inspire-Pack" + ], + "install_type": "git-clone", + "nodename_pattern": "Inspire$", + "reference": "https://github.com/ltdrdata/ComfyUI-Inspire-Pack", + "title": "ComfyUI Inspire Pack" + }, + { + "author": "comfyanonymous", + "description": "Nodes: ModelSamplerTonemapNoiseTest, TonemapNoiseWithRescaleCFG, ReferenceOnlySimple, RescaleClassifierFreeGuidanceTest, ModelMergeBlockNumber, ModelMergeSDXL, ModelMergeSDXLTransformers, ModelMergeSDXLDetailedTransformers.[w/NOTE: This is a consolidation of the previously separate custom nodes. Please delete the sampler_tonemap.py, sampler_rescalecfg.py, advanced_model_merging.py, sdxl_model_merging.py, and reference_only.py files installed in custom_nodes before.]", + "files": [ + "https://github.com/comfyanonymous/ComfyUI_experiments" + ], + "install_type": "git-clone", + "reference": "https://github.com/comfyanonymous/ComfyUI_experiments", + "title": "ComfyUI_experiments" + }, + { + "author": "Stability-AI", + "description": "Nodes: ColorBlend, ControlLoraSave, GetImageSize. NOTE: Control-LoRA recolor example uses these nodes.", + "files": [ + "https://github.com/Stability-AI/stability-ComfyUI-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Stability-AI/stability-ComfyUI-nodes", + "title": "stability-ComfyUI-nodes" + }, + { + "author": "Fannovel16", + "description": "This is a rework of comfyui_controlnet_preprocessors based on ControlNet auxiliary models by \ud83e\udd17. I think the old repo isn't good enough to maintain. All old workflow will still be work with this repo but the version option won't do anything. Almost all v1 preprocessors are replaced by v1.1 except those doesn't appear in v1.1. [w/NOTE: Please refrain from using the controlnet preprocessor alongside this installation, as it may lead to conflicts and prevent proper recognition.]", + "files": [ + "https://github.com/Fannovel16/comfyui_controlnet_aux" + ], + "install_type": "git-clone", + "reference": "https://github.com/Fannovel16/comfyui_controlnet_aux", + "title": "ComfyUI's ControlNet Auxiliary Preprocessors" + }, + { + "author": "Fannovel16", + "description": "Nodes: KSampler Gradually Adding More Denoise (efficient)", + "files": [ + "https://github.com/Fannovel16/ComfyUI-Frame-Interpolation" + ], + "install_type": "git-clone", + "reference": "https://github.com/Fannovel16/ComfyUI-Frame-Interpolation", + "title": "ComfyUI Frame Interpolation" + }, + { + "author": "Fannovel16", + "description": "A collection of nodes which can be useful for animation in ComfyUI. The main focus of this extension is implementing a mechanism called loopchain. A loopchain in this case is the chain of nodes only executed repeatly in the workflow. 
If a node chain contains a loop node from this extension, it will become a loop chain.", + "files": [ + "https://github.com/Fannovel16/ComfyUI-Loopchain" + ], + "install_type": "git-clone", + "reference": "https://github.com/Fannovel16/ComfyUI-Loopchain", + "title": "ComfyUI Loopchain" + }, + { + "author": "Fannovel16", + "description": "Implementation of MDM, MotionDiffuse and ReMoDiffuse into ComfyUI.", + "files": [ + "https://github.com/Fannovel16/ComfyUI-MotionDiff" + ], + "install_type": "git-clone", + "reference": "https://github.com/Fannovel16/ComfyUI-MotionDiff", + "title": "ComfyUI MotionDiff" + }, + { + "author": "Fannovel16", + "description": "A minimalistic implementation of [a/Robust Video Matting (RVM)](https://github.com/PeterL1n/RobustVideoMatting/) in ComfyUI", + "files": [ + "https://github.com/Fannovel16/ComfyUI-Video-Matting" + ], + "install_type": "git-clone", + "reference": "https://github.com/Fannovel16/ComfyUI-Video-Matting", + "title": "ComfyUI-Video-Matting" + }, + { + "author": "biegert", + "description": "The CLIPSeg node generates a binary mask for a given input image and text prompt.", + "files": [ + "https://github.com/biegert/ComfyUI-CLIPSeg/raw/main/custom_nodes/clipseg.py" + ], + "install_type": "copy", + "reference": "https://github.com/biegert/ComfyUI-CLIPSeg", + "title": "CLIPSeg" + }, + { + "author": "BlenderNeko", + "description": "These custom nodes provides features that allow for better control over the effects of the text prompt.", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_Cutoff" + ], + "install_type": "git-clone", + "reference": "https://github.com/BlenderNeko/ComfyUI_Cutoff", + "title": "ComfyUI Cutoff" + }, + { + "author": "BlenderNeko", + "description": "Advanced CLIP Text Encode (if you need A1111 like prompt. you need this. But Cutoff node includes this feature, already.)", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb" + ], + "install_type": "git-clone", + "reference": "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb", + "title": "Advanced CLIP Text Encode" + }, + { + "author": "BlenderNeko", + "description": "This extension contains 6 nodes for ComfyUI that allows for more control and flexibility over the noise.", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_Noise" + ], + "install_type": "git-clone", + "reference": "https://github.com/BlenderNeko/ComfyUI_Noise", + "title": "ComfyUI Noise" + }, + { + "author": "BlenderNeko", + "description": "This extension contains a tiled sampler for ComfyUI. It allows for denoising larger images by splitting it up into smaller tiles and denoising these. It tries to minimize any seams for showing up in the end result by gradually denoising all tiles one step at the time and randomizing tile positions for every step.", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_TiledKSampler" + ], + "install_type": "git-clone", + "reference": "https://github.com/BlenderNeko/ComfyUI_TiledKSampler", + "title": "Tiled sampling for ComfyUI" + }, + { + "author": "BlenderNeko", + "description": "It provides the capability to generate CLIP from an image input, unlike unCLIP, which works in all models. 
(To use this extension, you need to download the required model file from **Install Models**)", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_SeeCoder" + ], + "install_type": "git-clone", + "reference": "https://github.com/BlenderNeko/ComfyUI_SeeCoder", + "title": "SeeCoder [WIP]" + }, + { + "author": "jags111", + "description": "A collection of ComfyUI custom nodes to help streamline workflows and reduce total node count.[w/NOTE: This node is originally created by LucianoCirino, but the [a/original repository](https://github.com/LucianoCirino/efficiency-nodes-comfyui) is no longer maintained and has been forked by a new maintainer. To use the forked version, you should uninstall the original version and **REINSTALL** this one.]", + "files": [ + "https://github.com/jags111/efficiency-nodes-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/jags111/efficiency-nodes-comfyui", + "title": "Efficiency Nodes for ComfyUI Version 2.0+" + }, + { + "author": "jags111", + "description": "a collection of nodes to explore Vector and image manipulation", + "files": [ + "https://github.com/jags111/ComfyUI_Jags_VectorMagic" + ], + "install_type": "git-clone", + "reference": "https://github.com/jags111/ComfyUI_Jags_VectorMagic", + "title": "ComfyUI_Jags_VectorMagic" + }, + { + "author": "jags111", + "description": "This extension offers various audio generation tools", + "files": [ + "https://github.com/jags111/ComfyUI_Jags_Audiotools" + ], + "install_type": "git-clone", + "reference": "https://github.com/jags111/ComfyUI_Jags_Audiotools", + "title": "ComfyUI_Jags_Audiotools" + }, + { + "author": "Derfuu", + "description": "Automate calculation depending on image sizes or something you want.", + "files": [ + "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes", + "title": "Derfuu_ComfyUI_ModdedNodes" + }, + { + "apt_dependency": [ + "rustc", + "cargo" + ], + "author": "paulo-coronado", + "description": "CLIPTextEncodeBLIP: This custom node provides a CLIP Encoder that is capable of receiving images as input.", + "files": [ + "https://github.com/paulo-coronado/comfy_clip_blip_node" + ], + "install_type": "git-clone", + "reference": "https://github.com/paulo-coronado/comfy_clip_blip_node", + "title": "comfy_clip_blip_node" + }, + { + "author": "Davemane42", + "description": "This tool provides custom nodes that allow visualization and configuration of area conditioning and latent composite.", + "files": [ + "https://github.com/Davemane42/ComfyUI_Dave_CustomNode" + ], + "install_type": "git-clone", + "reference": "https://github.com/Davemane42/ComfyUI_Dave_CustomNode", + "title": "Visual Area Conditioning / Latent composition" + }, + { + "author": "WASasquatch", + "description": "A node suite for ComfyUI with many new nodes, such as image processing, text processing, and more.", + "files": [ + "https://github.com/WASasquatch/was-node-suite-comfyui" + ], + "install_type": "git-clone", + "pip": [ + "numba" + ], + "reference": "https://github.com/WASasquatch/was-node-suite-comfyui", + "title": "WAS Node Suite" + }, + { + "author": "WASasquatch", + "description": "Nodes: ModelMergeByPreset. 
Merge checkpoint models by preset", + "files": [ + "https://github.com/WASasquatch/ComfyUI_Preset_Merger" + ], + "install_type": "git-clone", + "reference": "https://github.com/WASasquatch/ComfyUI_Preset_Merger", + "title": "ComfyUI Preset Merger" + }, + { + "author": "WASasquatch", + "description": "Nodes: WAS_PFN_Latent. Perlin Power Fractal Noisey Latents", + "files": [ + "https://github.com/WASasquatch/PPF_Noise_ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/WASasquatch/PPF_Noise_ComfyUI", + "title": "PPF_Noise_ComfyUI" + }, + { + "author": "WASasquatch", + "description": "Power Noise Suite contains nodes centered around latent noise input, and diffusion, as well as latent adjustments.", + "files": [ + "https://github.com/WASasquatch/PowerNoiseSuite" + ], + "install_type": "git-clone", + "reference": "https://github.com/WASasquatch/PowerNoiseSuite", + "title": "Power Noise Suite for ComfyUI" + }, + { + "author": "WASasquatch", + "description": "This custom node provides advanced settings for FreeU.", + "files": [ + "https://github.com/WASasquatch/FreeU_Advanced" + ], + "install_type": "git-clone", + "reference": "https://github.com/WASasquatch/FreeU_Advanced", + "title": "FreeU_Advanced" + }, + { + "author": "WASasquatch", + "description": "Abstract Syntax Trees Evaluated Restricted Run (ASTERR) is a Python Script executor for ComfyUI. [w/Warning:ASTERR runs Python Code from a Web Interface! It is highly recommended to run this in a closed-off environment, as it could have potential security risks.]", + "files": [ + "https://github.com/WASasquatch/ASTERR" + ], + "install_type": "git-clone", + "reference": "https://github.com/WASasquatch/ASTERR", + "title": "ASTERR" + }, + { + "author": "WASasquatch", + "description": "Nodes:Conditioning (Blend), Inpainting VAE Encode (WAS), VividSharpen. Experimental nodes, or other random extra helper nodes.", + "files": [ + "https://github.com/WASasquatch/WAS_Extras" + ], + "install_type": "git-clone", + "reference": "https://github.com/WASasquatch/WAS_Extras", + "title": "WAS_Extras" + }, + { + "author": "omar92", + "description": "openAI suite, String suite, Latent Tools, Image Tools: These custom nodes provide expanded functionality for image and string processing, latent processing, as well as the ability to interface with models such as ChatGPT/DallE-2.\nNOTE: Currently, this extension does not support the new OpenAI API, leading to compatibility issues.", + "files": [ + "https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92" + ], + "install_type": "git-clone", + "reference": "https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92", + "title": "Quality of life Suit:V2" + }, + { + "author": "lilly1987", + "description": "These custom nodes provides a feature to insert arbitrary inputs through wildcards in the prompt. 
Additionally, this tool provides features that help simplify workflows, such as VAELoaderDecoder and SimplerSample.", + "files": [ + "https://github.com/lilly1987/ComfyUI_node_Lilly" + ], + "install_type": "git-clone", + "reference": "https://github.com/lilly1987/ComfyUI_node_Lilly", + "title": "simple wildcard for ComfyUI" + }, + { + "author": "sylym", + "description": "A node suite for ComfyUI that allows you to load image sequence and generate new image sequence with different styles or content.", + "files": [ + "https://github.com/sylym/comfy_vid2vid" + ], + "install_type": "git-clone", + "reference": "https://github.com/sylym/comfy_vid2vid", + "title": "Vid2vid" + }, + { + "author": "EllangoK", + "description": "A collection of post processing nodes for ComfyUI, simply download this repo and drag.", + "files": [ + "https://github.com/EllangoK/ComfyUI-post-processing-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/EllangoK/ComfyUI-post-processing-nodes", + "title": "ComfyUI-post-processing-nodes" + }, + { + "author": "LEv145", + "description": "This tool provides a viewer node that allows for checking multiple outputs in a grid, similar to the X/Y Plot extension.", + "files": [ + "https://github.com/LEv145/images-grid-comfy-plugin" + ], + "install_type": "git-clone", + "reference": "https://github.com/LEv145/images-grid-comfy-plugin", + "title": "ImagesGrid" + }, + { + "author": "diontimmer", + "description": "Nodes: Pixel Sort, Swap Color Mode, Solid Color, Glitch This, Add Text To Image, Play Sound, Prettify Prompt, Generate Noise, Flatten Colors", + "files": [ + "https://github.com/diontimmer/ComfyUI-Vextra-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/diontimmer/ComfyUI-Vextra-Nodes", + "title": "ComfyUI-Vextra-Nodes" + }, + { + "author": "CYBERLOOM-INC", + "description": "Provide various custom nodes for Latent, Sampling, Model, Loader, Image, Text. 
This is the fixed version of the original [a/ComfyUI-nodes-hnmr](https://github.com/hnmr293/ComfyUI-nodes-hnmr) by hnmr293.", + "files": [ + "https://github.com/CYBERLOOM-INC/ComfyUI-nodes-hnmr" + ], + "install_type": "git-clone", + "reference": "https://github.com/CYBERLOOM-INC/ComfyUI-nodes-hnmr", + "title": "ComfyUI-nodes-hnmr" + }, + { + "author": "BadCafeCode", + "description": "This is a node pack for ComfyUI, primarily dealing with masks.", + "files": [ + "https://github.com/BadCafeCode/masquerade-nodes-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/BadCafeCode/masquerade-nodes-comfyui", + "title": "Masquerade Nodes" + }, + { + "author": "guoyk93", + "description": "Nodes: YKImagePadForOutpaint, YKMaskToImage", + "files": [ + "https://github.com/guoyk93/yk-node-suite-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/guoyk93/yk-node-suite-comfyui", + "title": "y.k.'s ComfyUI node suite" + }, + { + "author": "Jcd1230", + "description": "Nodes: Image Remove Background (rembg)", + "files": [ + "https://github.com/Jcd1230/rembg-comfyui-node" + ], + "install_type": "git-clone", + "reference": "https://github.com/Jcd1230/rembg-comfyui-node", + "title": "Rembg Background Removal Node for ComfyUI" + }, + { + "author": "YinBailiang", + "description": "Nodes: MergeBlockWeighted", + "files": [ + "https://github.com/YinBailiang/MergeBlockWeighted_fo_ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/YinBailiang/MergeBlockWeighted_fo_ComfyUI", + "title": "MergeBlockWeighted_fo_ComfyUI" + }, + { + "author": "trojblue", + "description": "Nodes: image_layering, color_correction, model_router", + "files": [ + "https://github.com/trojblue/trNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/trojblue/trNodes", + "title": "trNodes" + }, + { + "author": "szhublox", + "description": "Auto-MBW for ComfyUI loosely based on sdweb-auto-MBW. Nodes: auto merge block weighted", + "files": [ + "https://github.com/szhublox/ambw_comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/szhublox/ambw_comfyui", + "title": "Auto-MBW" + }, + { + "author": "city96", + "description": "Run ComfyUI workflows on multiple local GPUs/networked machines. Nodes: Remote images, Local Remote control", + "files": [ + "https://github.com/city96/ComfyUI_NetDist" + ], + "install_type": "git-clone", + "reference": "https://github.com/city96/ComfyUI_NetDist", + "title": "ComfyUI_NetDist" + }, + { + "author": "city96", + "description": "Custom node to convert the lantents between SDXL and SD v1.5 directly without the VAE decoding/encoding step.", + "files": [ + "https://github.com/city96/SD-Latent-Interposer" + ], + "install_type": "git-clone", + "reference": "https://github.com/city96/SD-Latent-Interposer", + "title": "Latent-Interposer" + }, + { + "author": "city96", + "description": "Nodes: LatentGaussianNoise, MathEncode. 
An experimental custom node that generates latent noise directly by utilizing the linear characteristics of the latent space.", + "files": [ + "https://github.com/city96/SD-Advanced-Noise" + ], + "install_type": "git-clone", + "reference": "https://github.com/city96/SD-Advanced-Noise", + "title": "SD-Advanced-Noise" + }, + { + "author": "city96", + "description": "Upscaling stable diffusion latents using a small neural network.", + "files": [ + "https://github.com/city96/SD-Latent-Upscaler" + ], + "install_type": "git-clone", + "pip": [ + "huggingface-hub" + ], + "reference": "https://github.com/city96/SD-Latent-Upscaler", + "title": "SD-Latent-Upscaler" + }, + { + "author": "city96", + "description": "Testbed for [a/DiT(Scalable Diffusion Models with Transformers)](https://github.com/facebookresearch/DiT). [w/None of this code is stable, expect breaking changes if for some reason you want to use this.]", + "files": [ + "https://github.com/city96/ComfyUI_DiT" + ], + "install_type": "git-clone", + "pip": [ + "huggingface-hub" + ], + "reference": "https://github.com/city96/ComfyUI_DiT", + "title": "ComfyUI_DiT [WIP]" + }, + { + "author": "city96", + "description": "This extension currently has two sets of nodes - one set for editing the contrast/color of images and another set for saving images as 16 bit PNG files.", + "files": [ + "https://github.com/city96/ComfyUI_ColorMod" + ], + "install_type": "git-clone", + "reference": "https://github.com/city96/ComfyUI_ColorMod", + "title": "ComfyUI_ColorMod" + }, + { + "author": "city96", + "description": "This extension aims to add support for various random image diffusion models to ComfyUI.", + "files": [ + "https://github.com/city96/ComfyUI_ExtraModels" + ], + "install_type": "git-clone", + "reference": "https://github.com/city96/ComfyUI_ExtraModels", + "title": "Extra Models for ComfyUI" + }, + { + "author": "Kaharos94", + "description": "Save a picture as Webp file in Comfy + Workflow loading", + "files": [ + "https://github.com/Kaharos94/ComfyUI-Saveaswebp" + ], + "install_type": "git-clone", + "reference": "https://github.com/Kaharos94/ComfyUI-Saveaswebp", + "title": "ComfyUI-Saveaswebp" + }, + { + "author": "SLAPaper", + "description": "A custom node for ComfyUI, which can select one or some of images from a batch.", + "files": [ + "https://github.com/SLAPaper/ComfyUI-Image-Selector" + ], + "install_type": "git-clone", + "reference": "https://github.com/SLAPaper/ComfyUI-Image-Selector", + "title": "ComfyUI-Image-Selector" + }, + { + "author": "flyingshutter", + "description": "Manipulation nodes for Image, Latent", + "files": [ + "https://github.com/flyingshutter/As_ComfyUI_CustomNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/flyingshutter/As_ComfyUI_CustomNodes", + "title": "As_ComfyUI_CustomNodes" + }, + { + "author": "Zuellni", + "description": "Nodes: DeepFloyd, Filter, Select, Save, Decode, Encode, Repeat, Noise, Noise", + "files": [ + "https://github.com/Zuellni/ComfyUI-Custom-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Zuellni/ComfyUI-Custom-Nodes", + "title": "Zuellni/ComfyUI-Custom-Nodes" + }, + { + "author": "Zuellni", + "description": "Nodes: ExLlama Loader, ExLlama Generator.\nUsed to load 4-bit GPTQ Llama/2 models. You can find a lot of them over at [a/https://huggingface.co/TheBloke](https://huggingface.co/TheBloke)[w/NOTE: You need to manually install a pip package that suits your system. For example. 
If your system is 'Python3.10 + Windows + CUDA 11.8' then you need to install 'exllama-0.0.17+cu118-cp310-cp310-win_amd64.whl'. Available package files are [a/here](https://github.com/jllllll/exllama/releases)]", + "files": [ + "https://github.com/Zuellni/ComfyUI-ExLlama" + ], + "install_type": "git-clone", + "pip": [ + "sentencepiece", + "https://github.com/jllllll/exllama/releases/download/0.0.17/exllama-0.0.17+cu118-cp310-cp310-win_amd64.whl" + ], + "reference": "https://github.com/Zuellni/ComfyUI-ExLlama", + "title": "ComfyUI-ExLlama" + }, + { + "author": "Zuellni", + "description": "Image scoring nodes for ComfyUI using PickScore with a batch of images to predict which ones fit a given prompt the best.", + "files": [ + "https://github.com/Zuellni/ComfyUI-PickScore-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Zuellni/ComfyUI-PickScore-Nodes", + "title": "ComfyUI PickScore Nodes" + }, + { + "author": "AlekPet", + "description": "Nodes: PoseNode, PainterNode, TranslateTextNode, TranslateCLIPTextEncodeNode, DeepTranslatorTextNode, DeepTranslatorCLIPTextEncodeNode, ArgosTranslateTextNode, ArgosTranslateCLIPTextEncodeNode, PreviewTextNode.\n\nNOTE: Due to the dynamic nature of node name definitions, ComfyUI-Manager cannot recognize the node list from this extension. The Missing nodes and Badge features are not available for this extension.", + "files": [ + "https://github.com/AlekPet/ComfyUI_Custom_Nodes_AlekPet" + ], + "install_type": "git-clone", + "reference": "https://github.com/AlekPet/ComfyUI_Custom_Nodes_AlekPet", + "title": "AlekPet/ComfyUI_Custom_Nodes_AlekPet" + }, + { + "author": "pythongosssss", + "description": "A ComfyUI extension allowing the interrogation of booru tags from images.", + "files": [ + "https://github.com/pythongosssss/ComfyUI-WD14-Tagger" + ], + "install_type": "git-clone", + "reference": "https://github.com/pythongosssss/ComfyUI-WD14-Tagger", + "title": "ComfyUI WD 1.4 Tagger" + }, + { + "author": "pythongosssss", + "description": "This extension provides: Auto Arrange Graph, Workflow SVG, Favicon Status, Image Feed, Latent Upscale By, Lock Nodes & Groups, Lora Subfolders, Preset Text, Show Text, Touch Support, Link Render Mode, Locking, Node Finder, Quick Nodes, Show Image On Menu, Show Text, Workflow Managements, Custom Widget Default Values", + "files": [ + "https://github.com/pythongosssss/ComfyUI-Custom-Scripts" + ], + "install_type": "git-clone", + "reference": "https://github.com/pythongosssss/ComfyUI-Custom-Scripts", + "title": "pythongosssss/ComfyUI-Custom-Scripts" + }, + { + "author": "strimmlarn", + "description": "Nodes: CalculateAestheticScore, LoadAesteticModel, AesthetlcScoreSorter, ScoreToNumber", + "files": [ + "https://github.com/strimmlarn/ComfyUI_Strimmlarns_aesthetic_score" + ], + "install_type": "git-clone", + "js_path": "strimmlarn", + "reference": "https://github.com/strimmlarn/ComfyUI_Strimmlarns_aesthetic_score", + "title": "ComfyUI_Strimmlarns_aesthetic_score" + }, + { + "author": "tinyterra", + "description": "This extension offers various pipe nodes, fullscreen image viewer based on node history, dynamic widgets, interface customization, and more.", + "files": [ + "https://github.com/TinyTerra/ComfyUI_tinyterraNodes" + ], + "install_type": "git-clone", + "nodename_pattern": "^ttN ", + "reference": "https://github.com/tinyterra/ComfyUI_tinyterraNodes", + "title": "tinyterraNodes" + }, + { + "author": "Jordach", + "description": "Nodes: Plasma Noise, Random Noise, Greyscale Noise, Pink Noise, Brown 
Noise, Plasma KSampler", + "files": [ + "https://github.com/Jordach/comfy-plasma" + ], + "install_type": "git-clone", + "reference": "https://github.com/Jordach/comfy-plasma", + "title": "comfy-plasma" + }, + { + "author": "bvhari", + "description": "ComfyUI custom nodes to apply various image processing techniques.", + "files": [ + "https://github.com/bvhari/ComfyUI_ImageProcessing" + ], + "install_type": "git-clone", + "reference": "https://github.com/bvhari/ComfyUI_ImageProcessing", + "title": "ImageProcessing" + }, + { + "author": "bvhari", + "description": "ComfyUI custom node to convert latent to RGB.", + "files": [ + "https://github.com/bvhari/ComfyUI_LatentToRGB" + ], + "install_type": "git-clone", + "reference": "https://github.com/bvhari/ComfyUI_LatentToRGB", + "title": "LatentToRGB" + }, + { + "author": "bvhari", + "description": "A novel weighting scheme for token vectors from CLIP. Allows a wider range of values for the weight. Inspired by Perp-Neg.", + "files": [ + "https://github.com/bvhari/ComfyUI_PerpWeight" + ], + "install_type": "git-clone", + "reference": "https://github.com/bvhari/ComfyUI_PerpWeight", + "title": "ComfyUI_PerpWeight" + }, + { + "author": "ssitu", + "description": "ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.", + "files": [ + "https://github.com/ssitu/ComfyUI_UltimateSDUpscale" + ], + "install_type": "git-clone", + "reference": "https://github.com/ssitu/ComfyUI_UltimateSDUpscale", + "title": "UltimateSDUpscale" + }, + { + "author": "ssitu", + "description": "This extension provides the ability to combine multiple nodes into a single node.", + "files": [ + "https://github.com/ssitu/ComfyUI_NestedNodeBuilder" + ], + "install_type": "git-clone", + "reference": "https://github.com/ssitu/ComfyUI_NestedNodeBuilder", + "title": "NestedNodeBuilder" + }, + { + "author": "ssitu", + "description": "Unofficial ComfyUI nodes for restart sampling based on the paper 'Restart Sampling for Improving Generative Processes' ([a/paper](https://arxiv.org/abs/2306.14878), [a/repo](https://github.com/Newbeeer/diffusion_restart_sampling))", + "files": [ + "https://github.com/ssitu/ComfyUI_restart_sampling" + ], + "install_type": "git-clone", + "reference": "https://github.com/ssitu/ComfyUI_restart_sampling", + "title": "Restart Sampling" + }, + { + "author": "ssitu", + "description": "ComfyUI nodes for the roop A1111 webui script.", + "files": [ + "https://github.com/ssitu/ComfyUI_roop" + ], + "install_type": "git-clone", + "reference": "https://github.com/ssitu/ComfyUI_roop", + "title": "ComfyUI roop" + }, + { + "author": "ssitu", + "description": "ComfyUI nodes based on the paper [a/FABRIC: Personalizing Diffusion Models with Iterative Feedback](https://arxiv.org/abs/2307.10159) (Feedback via Attention-Based Reference Image Conditioning)", + "files": [ + "https://github.com/ssitu/ComfyUI_fabric" + ], + "install_type": "git-clone", + "reference": "https://github.com/ssitu/ComfyUI_fabric", + "title": "ComfyUI fabric" + }, + { + "author": "space-nuko", + "description": "Modularized version of Disco Diffusion for use with ComfyUI.", + "files": [ + "https://github.com/space-nuko/ComfyUI-Disco-Diffusion" + ], + "install_type": "git-clone", + "reference": "https://github.com/space-nuko/ComfyUI-Disco-Diffusion", + "title": "Disco Diffusion" + }, + { + "author": "space-nuko", + "description": "A port of the openpose-editor extension for stable-diffusion-webui. 
NOTE: Requires [a/this ComfyUI patch](https://github.com/comfyanonymous/ComfyUI/pull/711) to work correctly", + "files": [ + "https://github.com/space-nuko/ComfyUI-OpenPose-Editor" + ], + "install_type": "git-clone", + "reference": "https://github.com/space-nuko/ComfyUI-OpenPose-Editor", + "title": "OpenPose Editor" + }, + { + "author": "space-nuko", + "description": "NODES: Dynamic Prompts Text Encode, Feeling Lucky Text Encode, Output String", + "files": [ + "https://github.com/space-nuko/nui-suite" + ], + "install_type": "git-clone", + "reference": "https://github.com/space-nuko/nui-suite", + "title": "nui suite" + }, + { + "author": "Nourepide", + "description": "Allor is a plugin for ComfyUI with an emphasis on transparency and performance.\n[w/NOTE: If you do not disable the default node override feature in the settings, the built-in nodes, namely ImageScale and ImageScaleBy nodes, will be disabled. (ref: [a/Configuration](https://github.com/Nourepide/ComfyUI-Allor#configuration))]", + "files": [ + "https://github.com/Nourepide/ComfyUI-Allor" + ], + "install_type": "git-clone", + "reference": "https://github.com/Nourepide/ComfyUI-Allor", + "title": "Allor Plugin" + }, + { + "author": "melMass", + "description": "NODES: Face Swap, Film Interpolation, Latent Lerp, Int To Number, Bounding Box, Crop, Uncrop, ImageBlur, Denoise, ImageCompare, RGB to HSV, HSV to RGB, Color Correct, Modulo, Deglaze Image, Smart Step, ...", + "files": [ + "https://github.com/melMass/comfy_mtb" + ], + "install_type": "git-clone", + "nodename_pattern": "\\(mtb\\)$", + "reference": "https://github.com/melMass/comfy_mtb", + "title": "MTB Nodes" + }, + { + "author": "xXAdonesXx", + "description": "Implementation of AutoGen inside ComfyUI. This repository is under development, and not everything is functioning correctly yet.", + "files": [ + "https://github.com/xXAdonesXx/NodeGPT" + ], + "install_type": "git-clone", + "reference": "https://github.com/xXAdonesXx/NodeGPT", + "title": "NodeGPT" + }, + { + "author": "Suzie1", + "description": "Custom nodes for SDXL and SD1.5 including Multi-ControlNet, LoRA, Aspect Ratio, Process Switches, and many more nodes. NOTE: The maintainer has changed from RockOfFire to Suzie1. [w/Using an outdated version has resulted in reported issues with updates not being applied. Reinstalling the extension is advised.]", + "files": [ + "https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes", + "title": "ComfyUI_Comfyroll_CustomNodes" + }, + { + "author": "bmad4ever", + "description": "ComfyUI extension that adds undo (and redo) functionality.", + "files": [ + "https://github.com/bmad4ever/ComfyUI-Bmad-DirtyUndoRedo" + ], + "install_type": "git-clone", + "reference": "https://github.com/bmad4ever/ComfyUI-Bmad-DirtyUndoRedo", + "title": "ComfyUI-Bmad-DirtyUndoRedo" + }, + { + "author": "bmad4ever", + "description": "This custom node offers the following functionalities: API support for setting up API requests, computer vision primarily for masking or collages, and general utility to streamline workflow setup or implement essential missing features.", + "files": [ + "https://github.com/bmad4ever/comfyui_bmad_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/bmad4ever/comfyui_bmad_nodes", + "title": "Bmad Nodes" + }, + { + "author": "bmad4ever", + "description": "Experimental sampler node. 
Sampling alternates between A and B inputs until only one remains, starting with A. B steps run over a 2x2 grid, where three quarters of the grid are copies of the original input latent. When the optional mask is used, the region outside the defined ROI is copied from the original latent at the end of every step.", + "files": [ + "https://github.com/bmad4ever/comfyui_ab_samplercustom" + ], + "install_type": "git-clone", + "reference": "https://github.com/bmad4ever/comfyui_ab_samplercustom", + "title": "comfyui_ab_sampler" + }, + { + "author": "bmad4ever", + "description": "Given a set of lists, the node adjusts them so that when used as input to another node all the possible argument permutations are computed.", + "files": [ + "https://github.com/bmad4ever/comfyui_lists_cartesian_product" + ], + "install_type": "git-clone", + "reference": "https://github.com/bmad4ever/comfyui_lists_cartesian_product", + "title": "Lists Cartesian Product" + }, + { + "author": "FizzleDorf", + "description": "Scheduled prompts, scheduled float/int values and wave function nodes for animations and utility. Compatible with [a/framesync](https://www.framesync.xyz/) and [a/keyframe-string-generator](https://www.chigozie.co.uk/keyframe-string-generator/) for audio-synced animations in ComfyUI.", + "files": [ + "https://github.com/FizzleDorf/ComfyUI_FizzNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/FizzleDorf/ComfyUI_FizzNodes", + "title": "FizzNodes" + }, + { + "author": "FizzleDorf", + "description": "A ComfyUI implementation of Meta's [a/AITemplate](https://github.com/facebookincubator/AITemplate) repo for faster inference using cpp/cuda. This new repo is behind the old version but is a much more stable foundation to keep AIT online. Please be patient as the repo will eventually include the same features as before.\nNOTE: You can find the old AIT extension in the legacy channel.", + "files": [ + "https://github.com/FizzleDorf/ComfyUI-AIT" + ], + "install_type": "git-clone", + "reference": "https://github.com/FizzleDorf/ComfyUI-AIT", + "title": "ComfyUI-AIT" + }, + { + "author": "filipemeneses", + "description": "ComfyUI node that pixelizes images.", + "files": [ + "https://github.com/filipemeneses/comfy_pixelization" + ], + "install_type": "git-clone", + "reference": "https://github.com/filipemeneses/comfy_pixelization", + "title": "Pixelization" + }, + { + "author": "shiimizu", + "description": "NODES: CLIP Text Encode++. 
Achieve identical embeddings from stable-diffusion-webui for ComfyUI.", + "files": [ + "https://github.com/shiimizu/ComfyUI_smZNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/shiimizu/ComfyUI_smZNodes", + "title": "smZNodes" + }, + { + "author": "shiimizu", + "description": "The extension enables large image drawing & upscaling with limited VRAM via the following techniques:\n1.Two SOTA diffusion tiling algorithms: [a/Mixture of Diffusers](https://github.com/albarji/mixture-of-diffusers) and [a/MultiDiffusion](https://github.com/omerbt/MultiDiffusion)\n2.pkuliyi2015's Tiled VAE algorithm.", + "files": [ + "https://github.com/shiimizu/ComfyUI-TiledDiffusion" + ], + "install_type": "git-clone", + "reference": "https://github.com/shiimizu/ComfyUI-TiledDiffusion", + "title": "Tiled Diffusion & VAE for ComfyUI" + }, + { + "author": "ZaneA", + "description": "NODES: ImageRewardLoader, ImageRewardScore", + "files": [ + "https://github.com/ZaneA/ComfyUI-ImageReward" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZaneA/ComfyUI-ImageReward", + "title": "ImageReward" + }, + { + "author": "SeargeDP", + "description": "Custom nodes for easier use of SDXL in ComfyUI including an img2img workflow that utilizes both the base and refiner checkpoints.", + "files": [ + "https://github.com/SeargeDP/SeargeSDXL" + ], + "install_type": "git-clone", + "reference": "https://github.com/SeargeDP/SeargeSDXL", + "title": "SeargeSDXL" + }, + { + "author": "cubiq", + "description": "custom node for ComfyUI to perform simple math operations", + "files": [ + "https://github.com/cubiq/ComfyUI_SimpleMath" + ], + "install_type": "git-clone", + "reference": "https://github.com/cubiq/ComfyUI_SimpleMath", + "title": "Simple Math" + }, + { + "author": "cubiq", + "description": "ComfyUI reference implementation for IPAdapter models. The code is mostly taken from the original IPAdapter repository and laksjdjf's implementation, all credit goes to them. I just made the extension closer to ComfyUI philosophy.", + "files": [ + "https://github.com/cubiq/ComfyUI_IPAdapter_plus" + ], + "install_type": "git-clone", + "pip": [ + "insightface" + ], + "reference": "https://github.com/cubiq/ComfyUI_IPAdapter_plus", + "title": "ComfyUI_IPAdapter_plus" + }, + { + "author": "cubiq", + "description": "Native [a/InstantID](https://github.com/InstantID/InstantID) support for ComfyUI.\nThis extension differs from the many already available as it doesn't use diffusers but instead implements InstantID natively and it fully integrates with ComfyUI.\nPlease note this still could be considered beta stage, looking forward to your feedback.", + "files": [ + "https://github.com/cubiq/ComfyUI_InstantID" + ], + "install_type": "git-clone", + "reference": "https://github.com/cubiq/ComfyUI_InstantID", + "title": "ComfyUI InstantID (Native Support)" + }, + { + "author": "shockz0rz", + "description": "Nodes: Interpolate Poses, Interpolate Lineart, ... 
Custom nodes for interpolating between, well, everything in the Stable Diffusion ComfyUI.", + "files": [ + "https://github.com/shockz0rz/ComfyUI_InterpolateEverything" + ], + "install_type": "git-clone", + "reference": "https://github.com/shockz0rz/ComfyUI_InterpolateEverything", + "title": "InterpolateEverything" + }, + { + "author": "shockz0rz", + "description": "A set of custom nodes for creating image grids, sequences, and batches in ComfyUI.", + "files": [ + "https://github.com/shockz0rz/comfy-easy-grids" + ], + "install_type": "git-clone", + "reference": "https://github.com/shockz0rz/comfy-easy-grids", + "title": "comfy-easy-grids" + }, + { + "author": "yolanother", + "description": "Nodes: Prompt Agent, Prompt Agent (String). This script provides a prompt agent node for the Comfy UI stable diffusion client.", + "files": [ + "https://github.com/yolanother/DTAIComfyPromptAgent" + ], + "install_type": "git-clone", + "reference": "https://github.com/yolanother/DTAIComfyPromptAgent", + "title": "Comfy UI Prompt Agent" + }, + { + "author": "yolanother", + "description": "Nodes: Image URL to Text, Image to Text.", + "files": [ + "https://github.com/yolanother/DTAIImageToTextNode" + ], + "install_type": "git-clone", + "reference": "https://github.com/yolanother/DTAIImageToTextNode", + "title": "Image to Text Node" + }, + { + "author": "yolanother", + "description": "Nodes: Submit Image (Parameters), Submit Image. A collection of loaders that use a shared common online data source rather than relying on the files to be present locally.", + "files": [ + "https://github.com/yolanother/DTAIComfyLoaders" + ], + "install_type": "git-clone", + "reference": "https://github.com/yolanother/DTAIComfyLoaders", + "title": "Comfy UI Online Loaders" + }, + { + "author": "yolanother", + "description": "A ComfyUI submit node to upload images to DoubTech.ai", + "files": [ + "https://github.com/yolanother/DTAIComfyImageSubmit" + ], + "install_type": "git-clone", + "reference": "https://github.com/yolanother/DTAIComfyImageSubmit", + "title": "Comfy AI DoubTech.ai Image Submission Node" + }, + { + "author": "yolanother", + "description": "This extension introduces QR code nodes for the Comfy UI stable diffusion client. NOTE: ComfyUI qrcode extension required.", + "files": [ + "https://github.com/yolanother/DTAIComfyQRCodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/yolanother/DTAIComfyQRCodes", + "title": "Comfy UI QR Codes" + }, + { + "author": "yolanother", + "description": "Nodes: String, Int, Float, Short String, CLIP Text Encode (With Variables), String Format, Short String Format. This extension introduces quality of life improvements by providing variable nodes and shared global variables.", + "files": [ + "https://github.com/yolanother/DTAIComfyVariables" + ], + "install_type": "git-clone", + "reference": "https://github.com/yolanother/DTAIComfyVariables", + "title": "Variables for Comfy UI" + }, + { + "author": "sipherxyz", + "description": "Nodes: ImagesConcat, LoadImageFromUrl, AV_UploadImage", + "files": [ + "https://github.com/sipherxyz/comfyui-art-venture" + ], + "install_type": "git-clone", + "reference": "https://github.com/sipherxyz/comfyui-art-venture", + "title": "comfyui-art-venture" + }, + { + "author": "SOELexicon", + "description": "Nodes: MSSqlTableNode, MSSqlSelectNode. 
This extension provides custom nodes to interact with MSSQL.", + "files": [ + "https://github.com/SOELexicon/ComfyUI-LexMSDBNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/SOELexicon/ComfyUI-LexMSDBNodes", + "title": "LexMSDBNodes" + }, + { + "author": "pants007", + "description": "Nodes: Make Square Node, Interrogate Node, TextEncodeAIO", + "files": [ + "https://github.com/pants007/comfy-pants" + ], + "install_type": "git-clone", + "reference": "https://github.com/pants007/comfy-pants", + "title": "pants" + }, + { + "author": "evanspearman", + "description": "Provides Math Nodes for ComfyUI. Boolean Logic, Integer Arithmetic, Floating Point Arithmetic and Functions, Vec2, Vec3, and Vec4 Arithmetic and Functions", + "files": [ + "https://github.com/evanspearman/ComfyMath" + ], + "install_type": "git-clone", + "reference": "https://github.com/evanspearman/ComfyMath", + "title": "ComfyMath" + }, + { + "author": "civitai", + "description": "Nodes: CivitAI_Loaders. Load checkpoints and LoRA models directly from the CivitAI API.", + "files": [ + "https://github.com/civitai/comfy-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/civitai/comfy-nodes", + "title": "comfy-nodes" + }, + { + "author": "andersxa", + "description": "Nodes: CLIP Directional Prompt Attention Encode. Directional prompt attention tries to solve the problem of contextual words (or parts of the prompt) having an effect on much later or irrelevant parts of the prompt.", + "files": [ + "https://github.com/andersxa/comfyui-PromptAttention" + ], + "install_type": "git-clone", + "pip": [ + "scikit-learn", + "matplotlib" + ], + "reference": "https://github.com/andersxa/comfyui-PromptAttention", + "title": "CLIP Directional Prompt Attention" + }, + { + "author": "ArtVentureX", + "description": "AnimateDiff integration for ComfyUI, adapted from sd-webui-animatediff.\n[w/You only need to download one of [a/mm_sd_v14.ckpt](https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v14.ckpt) | [a/mm_sd_v15.ckpt](https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15.ckpt). Put the model weights under %%ComfyUI/custom_nodes/comfyui-animatediff/models%%. 
DO NOT change model filename.]", + "files": [ + "https://github.com/ArtVentureX/comfyui-animatediff" + ], + "install_type": "git-clone", + "pip": [ + "flash_attn" + ], + "reference": "https://github.com/ArtVentureX/comfyui-animatediff", + "title": "AnimateDiff" + }, + { + "author": "twri", + "description": "SDXL Prompt Styler is a node that enables you to style prompts based on predefined templates stored in a JSON file.", + "files": [ + "https://github.com/twri/sdxl_prompt_styler" + ], + "install_type": "git-clone", + "reference": "https://github.com/twri/sdxl_prompt_styler", + "title": "SDXL Prompt Styler" + }, + { + "author": "wolfden", + "description": "These custom nodes provide a variety of customized prompt stylers based on [a/twri/SDXL Prompt Styler](https://github.com/twri/sdxl_prompt_styler).", + "files": [ + "https://github.com/wolfden/ComfyUi_PromptStylers" + ], + "install_type": "git-clone", + "reference": "https://github.com/wolfden/ComfyUi_PromptStylers", + "title": "SDXL Prompt Styler (customized version by wolfden)" + }, + { + "author": "wolfden", + "description": "This custom node provides the capability to manipulate multiple string inputs.", + "files": [ + "https://github.com/wolfden/ComfyUi_String_Function_Tree" + ], + "install_type": "git-clone", + "reference": "https://github.com/wolfden/ComfyUi_String_Function_Tree", + "title": "ComfyUi_String_Function_Tree" + }, + { + "author": "daxthin", + "description": "Face Detailer is a custom node for the 'ComfyUI' framework, inspired by the !After Detailer extension for auto1111. It allows you to detect faces using Mediapipe and YOLOv8n and create masks for the detected faces.", + "files": [ + "https://github.com/daxthin/DZ-FaceDetailer" + ], + "install_type": "git-clone", + "reference": "https://github.com/daxthin/DZ-FaceDetailer", + "title": "DZ-FaceDetailer" + }, + { + "author": "asagi4", + "description": "Nodes for convenient prompt editing. The aim is to make basic generations in ComfyUI completely prompt-controllable.", + "files": [ + "https://github.com/asagi4/comfyui-prompt-control" + ], + "install_type": "git-clone", + "reference": "https://github.com/asagi4/comfyui-prompt-control", + "title": "ComfyUI prompt control" + }, + { + "author": "asagi4", + "description": "Attempts to implement [a/CADS](https://arxiv.org/abs/2310.17347) for ComfyUI. Credit also to the [a/A1111 implementation](https://github.com/v0xie/sd-webui-cads/tree/main) that I used as a reference.", + "files": [ + "https://github.com/asagi4/ComfyUI-CADS" + ], + "install_type": "git-clone", + "reference": "https://github.com/asagi4/ComfyUI-CADS", + "title": "ComfyUI-CADS" + }, + { + "author": "asagi4", + "description": "Nodes:MUJinjaRender, MUSimpleWildcard", + "files": [ + "https://github.com/asagi4/comfyui-utility-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/asagi4/comfyui-utility-nodes", + "title": "asagi4/comfyui-utility-nodes" + }, + { + "author": "jamesWalker55", + "description": "Nodes: P2LDGAN. This integrates P2LDGAN into ComfyUI. 
P2LDGAN extracts lineart from input images.\n[w/To use this extension, you need to download the [a/p2ldgan model](https://drive.google.com/file/d/1To4V_Btc3QhCLBWZ0PdSNgC1cbm3isHP) and save it in the %%ComfyUI/custom_nodes/comfyui-p2ldgan/checkpoints%% directory.]", + "files": [ + "https://github.com/jamesWalker55/comfyui-p2ldgan" + ], + "install_type": "git-clone", + "reference": "https://github.com/jamesWalker55/comfyui-p2ldgan", + "title": "ComfyUI - P2LDGAN Node" + }, + { + "author": "jamesWalker55", + "description": "Nodes: JWInteger, JWFloat, JWString, JWImageLoadRGB, JWImageResize, ...", + "files": [ + "https://github.com/jamesWalker55/comfyui-various" + ], + "install_type": "git-clone", + "nodename_pattern": "^JW", + "reference": "https://github.com/jamesWalker55/comfyui-various", + "title": "Various ComfyUI Nodes by Type" + }, + { + "author": "adieyal", + "description": "Nodes: Random Prompts, Combinatorial Prompts, I'm Feeling Lucky, Magic Prompt, Jinja2 Templates. ComfyUI-DynamicPrompts is a custom node library that integrates into your existing ComfyUI installation. It provides nodes that enable the use of Dynamic Prompts in your workflows.", + "files": [ + "https://github.com/adieyal/comfyui-dynamicprompts" + ], + "install_type": "git-clone", + "reference": "https://github.com/adieyal/comfyui-dynamicprompts", + "title": "DynamicPrompts Custom Nodes" + }, + { + "author": "mihaiiancu", + "description": "Nodes: InpaintMediapipe. This node provides a simple interface to inpaint.", + "files": [ + "https://github.com/mihaiiancu/ComfyUI_Inpaint" + ], + "install_type": "git-clone", + "reference": "https://github.com/mihaiiancu/ComfyUI_Inpaint", + "title": "mihaiiancu/Inpaint" + }, + { + "author": "kwaroran", + "description": "Nodes: Remove Image Background (abg). 
An Anime Background Remover node for ComfyUI, based on this HF space; works the same as the ABG extension in automatic1111.", + "files": [ + "https://github.com/kwaroran/abg-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/kwaroran/abg-comfyui", + "title": "abg-comfyui" + }, + { + "author": "bash-j", + "description": "Nodes: Prompt With Style, Prompt With SDXL, Resize Image for SDXL, Save Image With Prompt Data, HaldCLUT, Empty Latent Ratio Select/Custom SDXL", + "files": [ + "https://github.com/bash-j/mikey_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/bash-j/mikey_nodes", + "title": "Mikey Nodes" + }, + { + "author": "failfa.st", + "description": "node color customization, custom colors, dot reroutes, link rendering options, straight lines, group freezing, node pinning, automated arrangement of nodes, copy image", + "files": [ + "https://github.com/failfa-st/failfast-comfyui-extensions" + ], + "install_type": "git-clone", + "reference": "https://github.com/failfa-st/failfast-comfyui-extensions", + "title": "failfast-comfyui-extensions" + }, + { + "author": "Pfaeff", + "description": "Nodes: AstropulsePixelDetector, BackgroundRemover, ImagePadForBetterOutpaint, InpaintingPipelineLoader, Inpainting, ...", + "files": [ + "https://github.com/Pfaeff/pfaeff-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/Pfaeff/pfaeff-comfyui", + "title": "pfaeff-comfyui" + }, + { + "author": "wallish77", + "description": "Nodes: Checkpoint Loader with Name, Save Prompt Info, Outpaint to Image, CLIP Positive-Negative, SDXL Quick Empty Latent, Empty Latent by Ratio, Time String, SDXL Steps, SDXL Resolutions ...", + "files": [ + "https://github.com/wallish77/wlsh_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/wallish77/wlsh_nodes", + "title": "wlsh_nodes" + }, + { + "author": "Kosinkadink", + "description": "Nodes: ControlNetLoaderAdvanced, DiffControlNetLoaderAdvanced, ScaledSoftControlNetWeights, SoftControlNetWeights, CustomControlNetWeights, SoftT2IAdapterWeights, CustomT2IAdapterWeights", + "files": [ + "https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet" + ], + "install_type": "git-clone", + "reference": "https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet", + "title": "ComfyUI-Advanced-ControlNet" + }, + { + "author": "Kosinkadink", + "description": "A forked repository that actively maintains [a/AnimateDiff](https://github.com/ArtVentureX/comfyui-animatediff), created by ArtVentureX.\n\nImproved AnimateDiff integration for ComfyUI, adapted from sd-webui-animatediff.\n[w/Download one or more motion models from [a/Original Models](https://huggingface.co/guoyww/animatediff/tree/main) | [a/Finetuned Models](https://huggingface.co/manshoety/AD_Stabilized_Motion/tree/main). See README for additional model links and usage. Put the model weights under %%ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models%%. You are free to rename the models, but keeping original names will ease use when sharing your workflow.]", + "files": [ + "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved" + ], + "install_type": "git-clone", + "reference": "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved", + "title": "AnimateDiff Evolved" + }, + { + "author": "Kosinkadink", + "description": "Nodes: VHS_VideoCombine. 
Nodes related to video workflows", + "files": [ + "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite" + ], + "install_type": "git-clone", + "reference": "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite", + "title": "ComfyUI-VideoHelperSuite" + }, + { + "author": "Gourieff", + "description": "The Fast and Simple 'roop-like' Face Swap Extension Node for ComfyUI, based on ReActor (ex Roop-GE) SD-WebUI Face Swap Extension", + "files": [ + "https://github.com/Gourieff/comfyui-reactor-node" + ], + "install_type": "git-clone", + "reference": "https://github.com/Gourieff/comfyui-reactor-node", + "title": "ReActor Node for ComfyUI" + }, + { + "author": "imb101", + "description": "Nodes:FaceSwapNode. Very basic custom node to enable face swapping in ComfyUI. (roop)", + "files": [ + "https://github.com/imb101/ComfyUI-FaceSwap" + ], + "install_type": "git-clone", + "reference": "https://github.com/imb101/ComfyUI-FaceSwap", + "title": "FaceSwap" + }, + { + "author": "Chaoses-Ib", + "description": "Nodes: LoadImageFromPath. Load Image From Path loads the image directly from its source path rather than from an uploaded copy, avoiding the duplicate-file problems of the built-in Load Image node.", + "files": [ + "https://github.com/Chaoses-Ib/ComfyUI_Ib_CustomNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Chaoses-Ib/ComfyUI_Ib_CustomNodes", + "title": "ComfyUI_Ib_CustomNodes" + }, + { + "author": "AIrjen", + "description": "One Button Prompt has a prompt generation node for beginners who have problems writing a good prompt, or advanced users who want to get inspired. It generates an entire prompt from scratch. It is random, but controlled. You simply load up the script and press generate, and let it surprise you.", + "files": [ + "https://github.com/AIrjen/OneButtonPrompt" + ], + "install_type": "git-clone", + "reference": "https://github.com/AIrjen/OneButtonPrompt", + "title": "One Button Prompt" + }, + { + "author": "coreyryanhanson", + "description": "QR generation within ComfyUI. Contains nodes suitable for workflows from generating basic QR images to techniques with advanced QR masking.", + "files": [ + "https://github.com/coreyryanhanson/ComfyQR" + ], + "install_type": "git-clone", + "reference": "https://github.com/coreyryanhanson/ComfyQR", + "title": "ComfyQR" + }, + { + "author": "coreyryanhanson", + "description": "A set of ComfyUI nodes to quickly test generated QR codes for scannability. 
A companion project to ComfyQR.", + "files": [ + "https://github.com/coreyryanhanson/ComfyQR-scanning-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/coreyryanhanson/ComfyQR-scanning-nodes", + "title": "ComfyQR-scanning-nodes" + }, + { + "author": "dimtoneff", + "description": "This node manipulates pixel art images so that they look pixel-perfect (downscaling, changing the palette, upscaling, etc.).", + "files": [ + "https://github.com/dimtoneff/ComfyUI-PixelArt-Detector" + ], + "install_type": "git-clone", + "reference": "https://github.com/dimtoneff/ComfyUI-PixelArt-Detector", + "title": "ComfyUI PixelArt Detector" + }, + { + "author": "dimtoneff", + "description": "Nodes: EagleImageNode", + "files": [ + "https://github.com/hylarucoder/ComfyUI-Eagle-PNGInfo" + ], + "install_type": "git-clone", + "reference": "https://github.com/hylarucoder/ComfyUI-Eagle-PNGInfo", + "title": "Eagle PNGInfo" + }, + { + "author": "theUpsider", + "description": "This extension allows users to load styles from a CSV file, primarily for migration purposes from the automatic1111 Stable Diffusion web UI.", + "files": [ + "https://github.com/theUpsider/ComfyUI-Styles_CSV_Loader" + ], + "install_type": "git-clone", + "reference": "https://github.com/theUpsider/ComfyUI-Styles_CSV_Loader", + "title": "Styles CSV Loader Extension for ComfyUI" + }, + { + "author": "M1kep", + "description": "Nodes: Range(Step), Range(Num Steps), List Length, Image Overlay, Stack Images, Empty Images, Join Image Lists, Join Float Lists. This extension provides various list manipulation nodes", + "files": [ + "https://github.com/M1kep/Comfy_KepListStuff" + ], + "install_type": "git-clone", + "reference": "https://github.com/M1kep/Comfy_KepListStuff", + "title": "Comfy_KepListStuff" + }, + { + "author": "M1kep", + "description": "Nodes: Int, Float, String, Operation, Checkpoint", + "files": [ + "https://github.com/M1kep/ComfyLiterals" + ], + "install_type": "git-clone", + "reference": "https://github.com/M1kep/ComfyLiterals", + "title": "ComfyLiterals" + }, + { + "author": "M1kep", + "description": "Nodes: Build Gif, Special CLIP Loader. It offers various manipulation capabilities for the internal operations of the prompt.", + "files": [ + "https://github.com/M1kep/KepPromptLang" + ], + "install_type": "git-clone", + "reference": "https://github.com/M1kep/KepPromptLang", + "title": "KepPromptLang" + }, + { + "author": "M1kep", + "description": "This extension provides a custom node that allows the use of [a/Matte Anything](https://github.com/hustvl/Matte-Anything) in ComfyUI.", + "files": [ + "https://github.com/M1kep/Comfy_KepMatteAnything" + ], + "install_type": "git-clone", + "reference": "https://github.com/M1kep/Comfy_KepMatteAnything", + "title": "Comfy_KepMatteAnything" + }, + { + "author": "M1kep", + "description": "Nodes: KepRotateImage", + "files": [ + "https://github.com/M1kep/Comfy_KepKitchenSink" + ], + "install_type": "git-clone", + "reference": "https://github.com/M1kep/Comfy_KepKitchenSink", + "title": "Comfy_KepKitchenSink" + }, + { + "author": "M1kep", + "description": "Nodes: TAESD VAE Decode", + "files": [ + "https://github.com/M1kep/ComfyUI-OtherVAEs" + ], + "install_type": "git-clone", + "reference": "https://github.com/M1kep/ComfyUI-OtherVAEs", + "title": "ComfyUI-OtherVAEs" + }, + { + "author": "M1kep", + "description": "ComfyUI-KepOpenAI is a user-friendly node that serves as an interface to the GPT-4 with Vision (GPT-4V) API. 
This integration facilitates the processing of images coupled with text prompts, leveraging the capabilities of the OpenAI API to generate text completions that are contextually relevant to the provided inputs.", + "files": [ + "https://github.com/M1kep/ComfyUI-KepOpenAI" + ], + "install_type": "git-clone", + "reference": "https://github.com/M1kep/ComfyUI-KepOpenAI", + "title": "ComfyUI-KepOpenAI" + }, + { + "author": "uarefans", + "description": "Nodes: Fans Styler (Max 10 Style), Fans Text Concat (Until 10 text).", + "files": [ + "https://github.com/uarefans/ComfyUI-Fans" + ], + "install_type": "git-clone", + "reference": "https://github.com/uarefans/ComfyUI-Fans", + "title": "ComfyUI-Fans" + }, + { + "author": "NicholasMcCarthy", + "description": "ComfyUI custom nodes to apply various latent travel techniques.", + "files": [ + "https://github.com/NicholasMcCarthy/ComfyUI_TravelSuite" + ], + "install_type": "git-clone", + "reference": "https://github.com/NicholasMcCarthy/ComfyUI_TravelSuite", + "title": "ComfyUI_TravelSuite" + }, + { + "author": "ManglerFTW", + "description": "A set of custom nodes to perform image-to-image functions in ComfyUI.", + "files": [ + "https://github.com/ManglerFTW/ComfyI2I" + ], + "install_type": "git-clone", + "reference": "https://github.com/ManglerFTW/ComfyI2I", + "title": "ComfyI2I" + }, + { + "author": "theUpsider", + "description": "An extension to ComfyUI that introduces logic nodes and conditional rendering capabilities.", + "files": [ + "https://github.com/theUpsider/ComfyUI-Logic" + ], + "install_type": "git-clone", + "reference": "https://github.com/theUpsider/ComfyUI-Logic", + "title": "ComfyUI-Logic" + }, + { + "author": "mpiquero7164", + "description": "Save a PNG or JPEG, with an option to save the prompt/workflow in a text or JSON file for each image, in ComfyUI + workflow loading.", + "files": [ + "https://github.com/mpiquero7164/ComfyUI-SaveImgPrompt" + ], + "install_type": "git-clone", + "reference": "https://github.com/mpiquero7164/ComfyUI-SaveImgPrompt", + "title": "SaveImgPrompt" + }, + { + "author": "m-sokes", + "description": "Nodes: Empty Latent Randomizer (9 Inputs)", + "files": [ + "https://github.com/m-sokes/ComfyUI-Sokes-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/m-sokes/ComfyUI-Sokes-Nodes", + "title": "ComfyUI Sokes Nodes" + }, + { + "author": "Extraltodeus", + "description": "Nodes: NoisyLatentPerlin. This allows you to create latents filled with Perlin-based noise that can actually be used by the samplers.", + "files": [ + "https://github.com/Extraltodeus/noise_latent_perlinpinpin" + ], + "install_type": "git-clone", + "reference": "https://github.com/Extraltodeus/noise_latent_perlinpinpin", + "title": "noise latent perlinpinpin" + }, + { + "author": "Extraltodeus", + "description": "Nodes:LoadLoraWithTags. 
Save/load trigger words for LoRAs from a JSON file, and auto-fetch them from CivitAI if they are missing.", + "files": [ + "https://github.com/Extraltodeus/LoadLoraWithTags" + ], + "install_type": "git-clone", + "reference": "https://github.com/Extraltodeus/LoadLoraWithTags", + "title": "LoadLoraWithTags" + }, + { + "author": "Extraltodeus", + "description": "A few nodes to mix sigmas and a custom scheduler that uses phi, then one using eval() to be able to schedule with custom formulas.", + "files": [ + "https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler" + ], + "install_type": "git-clone", + "reference": "https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler", + "title": "sigmas_tools_and_the_golden_scheduler" + }, + { + "author": "Extraltodeus", + "description": "My own version 'from scratch' of a self-rescaling CFG. It isn't much but it's honest work.\nTLDR: set your CFG at 8 to try it. No burned images and artifacts anymore. CFG is also a bit more sensitive because it's a proportion around 8. Low scale like 4 also gives really nice results since your CFG is not the CFG anymore. Also in general even with relatively low settings it seems to improve the quality.", + "files": [ + "https://github.com/Extraltodeus/ComfyUI-AutomaticCFG" + ], + "install_type": "git-clone", + "reference": "https://github.com/Extraltodeus/ComfyUI-AutomaticCFG", + "title": "ComfyUI-AutomaticCFG" + }, + { + "author": "JPS", + "description": "Nodes: Various nodes to handle SDXL Resolutions, SDXL Basic Settings, IP Adapter Settings, Revision Settings, SDXL Prompt Styler, Crop Image to Square, Crop Image to Target Size, Get Date-Time String, Resolution Multiply, Largest Integer, 5-to-1 Switches for Integer, Images, Latents, Conditioning, Model, VAE, ControlNet", + "files": [ + "https://github.com/JPS-GER/ComfyUI_JPS-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/JPS-GER/ComfyUI_JPS-Nodes", + "title": "JPS Custom Nodes for ComfyUI" + }, + { + "author": "hustille", + "description": "ComfyUI nodes primarily for seed and filename generation", + "files": [ + "https://github.com/hustille/ComfyUI_hus_utils" + ], + "install_type": "git-clone", + "reference": "https://github.com/hustille/ComfyUI_hus_utils", + "title": "hus' utils for ComfyUI" + }, + { + "author": "hustille", + "description": "Nodes: KSampler With Refiner (Fooocus). The KSampler from [a/Fooocus](https://github.com/lllyasviel/Fooocus) as a ComfyUI node [w/NOTE: This patches basic ComfyUI behaviour - don't use together with other samplers. Or perhaps do? Other samplers might profit from those changes ... ymmv.]", + "files": [ + "https://github.com/hustille/ComfyUI_Fooocus_KSampler" + ], + "install_type": "git-clone", + "reference": "https://github.com/hustille/ComfyUI_Fooocus_KSampler", + "title": "ComfyUI_Fooocus_KSampler" + }, + { + "author": "badjeff", + "description": "A ComfyUI custom node to read LoRA tag(s) from text and load them into the checkpoint model.", + "files": [ + "https://github.com/badjeff/comfyui_lora_tag_loader" + ], + "install_type": "git-clone", + "reference": "https://github.com/badjeff/comfyui_lora_tag_loader", + "title": "LoRA Tag Loader for ComfyUI" + }, + { + "author": "rgthree", + "description": "Nodes: Seed, Reroute, Context, Lora Loader Stack, Context Switch, Fast Muter. 
These custom nodes help organize the building of complex workflows.", + "files": [ + "https://github.com/rgthree/rgthree-comfy" + ], + "install_type": "git-clone", + "nodename_pattern": " \\(rgthree\\)$", + "reference": "https://github.com/rgthree/rgthree-comfy", + "title": "rgthree's ComfyUI Nodes" + }, + { + "author": "AIGODLIKE", + "description": "It provides language settings. (Contributions from users of various languages are needed to support each language.)", + "files": [ + "https://github.com/AIGODLIKE/AIGODLIKE-COMFYUI-TRANSLATION" + ], + "install_type": "git-clone", + "reference": "https://github.com/AIGODLIKE/AIGODLIKE-COMFYUI-TRANSLATION", + "title": "AIGODLIKE-COMFYUI-TRANSLATION" + }, + { + "author": "AIGODLIKE", + "description": "Improve the interactive experience of using ComfyUI, such as making the loading of ComfyUI models more intuitive and making it easier to create model thumbnails", + "files": [ + "https://github.com/AIGODLIKE/AIGODLIKE-ComfyUI-Studio" + ], + "install_type": "git-clone", + "reference": "https://github.com/AIGODLIKE/AIGODLIKE-ComfyUI-Studio", + "title": "AIGODLIKE-ComfyUI-Studio" + }, + { + "author": "syllebra", + "description": "Nodes: BilboX's PromptGeek Photo Prompt. This provides a convenient way to compose photorealistic prompts into ComfyUI.", + "files": [ + "https://github.com/syllebra/bilbox-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/syllebra/bilbox-comfyui", + "title": "BilboX's ComfyUI Custom Nodes" + }, + { + "author": "Girish Gopaul", + "description": "All the tools you need to save images with their generation metadata on ComfyUI. Compatible with Civitai & Prompthero geninfo auto-detection. Works with png, jpeg and webp.", + "files": [ + "https://github.com/giriss/comfy-image-saver" + ], + "install_type": "git-clone", + "reference": "https://github.com/giriss/comfy-image-saver", + "title": "Save Image with Generation Metadata" + }, + { + "author": "shingo1228", + "description": "Nodes:Send Webp Image to Eagle. This is an extension node for ComfyUI that allows you to send generated images in webp format to Eagle. This extension node is a re-implementation of the Eagle linkage functions of the previous ComfyUI-send-Eagle node, focusing on the functions required for this node.", + "files": [ + "https://github.com/shingo1228/ComfyUI-send-eagle-slim" + ], + "install_type": "git-clone", + "reference": "https://github.com/shingo1228/ComfyUI-send-eagle-slim", + "title": "ComfyUI-send-Eagle(slim)" + }, + { + "author": "shingo1228", + "description": "Nodes:SDXL Empty Latent Image. An extension node for ComfyUI that allows you to select a resolution from the pre-defined json files and output a Latent Image.", + "files": [ + "https://github.com/shingo1228/ComfyUI-SDXL-EmptyLatentImage" + ], + "install_type": "git-clone", + "reference": "https://github.com/shingo1228/ComfyUI-SDXL-EmptyLatentImage", + "title": "ComfyUI-SDXL-EmptyLatentImage" + }, + { + "author": "laksjdjf", + "description": "ComfyUI version of https://github.com/laksjdjf/pfg-webui. (To use this extension, you need to download the required model file from **Install Models**)", + "files": [ + "https://github.com/laksjdjf/pfg-ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/laksjdjf/pfg-ComfyUI", + "title": "pfg-ComfyUI" + }, + { + "author": "laksjdjf", + "description": "Nodes:Attention couple. This is a custom node that manipulates region-specific prompts. 
While vanilla ComfyUI employs an area specification method based on latent couples, this node divides regions using attention layers within UNet.", + "files": [ + "https://github.com/laksjdjf/attention-couple-ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/laksjdjf/attention-couple-ComfyUI", + "title": "attention-couple-ComfyUI" + }, + { + "author": "laksjdjf", + "description": "Nodes:Apply CDTuner, Apply Negapip. This extension provides the [a/CD(Color/Detail) Tuner](https://github.com/hako-mikan/sd-webui-cd-tuner) and the [a/Negative Prompt in the Prompt](https://github.com/hako-mikan/sd-webui-negpip) features.", + "files": [ + "https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI", + "title": "cd-tuner_negpip-ComfyUI" + }, + { + "author": "laksjdjf", + "description": "Nodes:Load LoRA Weight Only, Load LoRA from Weight, Merge LoRA, Save LoRA. This extension provides nodes for merging LoRA.", + "files": [ + "https://github.com/laksjdjf/LoRA-Merger-ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/laksjdjf/LoRA-Merger-ComfyUI", + "title": "LoRA-Merger-ComfyUI" + }, + { + "author": "laksjdjf", + "description": "This extension node is intended for the use of LCM conversion for SSD-1B-anime. It does not guarantee operation with the original LCM (as it cannot load weights in the current version). To take advantage of fast generation with LCM, a node for using TAESD as a decoder is also provided. This is inspired by ComfyUI-OtherVAEs.", + "files": [ + "https://github.com/laksjdjf/LCMSampler-ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/laksjdjf/LCMSampler-ComfyUI", + "title": "LCMSampler-ComfyUI" + }, + { + "author": "alsritter", + "description": "Nodes:Asymmetric_Tiling_KSampler. 
", + "files": [ + "https://github.com/alsritter/asymmetric-tiling-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/alsritter/asymmetric-tiling-comfyui", + "title": "asymmetric-tiling-comfyui" + }, + { + "author": "meap158", + "description": "Pause image generation when GPU temperature exceeds threshold.", + "files": [ + "https://github.com/meap158/ComfyUI-GPU-temperature-protection" + ], + "install_type": "git-clone", + "reference": "https://github.com/meap158/ComfyUI-GPU-temperature-protection", + "title": "GPU temperature protection" + }, + { + "author": "meap158", + "description": "Dynamic prompt expansion, powered by GPT-2 locally on your device.", + "files": [ + "https://github.com/meap158/ComfyUI-Prompt-Expansion" + ], + "install_type": "git-clone", + "reference": "https://github.com/meap158/ComfyUI-Prompt-Expansion", + "title": "ComfyUI-Prompt-Expansion" + }, + { + "author": "meap158", + "description": "Instantly replace your image's background.", + "files": [ + "https://github.com/meap158/ComfyUI-Background-Replacement" + ], + "install_type": "git-clone", + "reference": "https://github.com/meap158/ComfyUI-Background-Replacement", + "title": "ComfyUI-Background-Replacement" + }, + { + "author": "TeaCrab", + "description": "Nodes:TC_EqualizeCLAHE, TC_SizeApproximation, TC_ImageResize, TC_ImageScale, TC_ColorFill.", + "files": [ + "https://github.com/TeaCrab/ComfyUI-TeaNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/TeaCrab/ComfyUI-TeaNodes", + "title": "ComfyUI-TeaNodes" + }, + { + "author": "nagolinc", + "description": "Based off of: [a/Birch-san/diffusers-play/approx_vae](https://github.com/Birch-san/diffusers-play/tree/main/approx_vae). This ComfyUI node allows you to quickly preview SDXL 1.0 latents.", + "files": [ + "https://github.com/nagolinc/ComfyUI_FastVAEDecorder_SDXL" + ], + "install_type": "git-clone", + "reference": "https://github.com/nagolinc/ComfyUI_FastVAEDecorder_SDXL", + "title": "ComfyUI_FastVAEDecorder_SDXL" + }, + { + "author": "bradsec", + "description": "Nodes:ResolutionSelector", + "files": [ + "https://github.com/bradsec/ComfyUI_ResolutionSelector" + ], + "install_type": "git-clone", + "reference": "https://github.com/bradsec/ComfyUI_ResolutionSelector", + "title": "ResolutionSelector for ComfyUI" + }, + { + "author": "kohya-ss", + "description": "Nodes: LLLiteLoader", + "files": [ + "https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI", + "title": "ControlNet-LLLite-ComfyUI" + }, + { + "author": "jjkramhoeft", + "description": "Nodes: SDXLRecommendedImageSize, JjkText, JjkShowText, JjkConcat. 
A set of custom nodes for ComfyUI - focused on text and parameter utility", + "files": [ + "https://github.com/jjkramhoeft/ComfyUI-Jjk-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/jjkramhoeft/ComfyUI-Jjk-Nodes", + "title": "ComfyUI-Jjk-Nodes" + }, + { + "author": "dagthomas", + "description": "Easy prompting for generation of endless random art pieces and photographs!", + "files": [ + "https://github.com/dagthomas/comfyui_dagthomas" + ], + "install_type": "git-clone", + "reference": "https://github.com/dagthomas/comfyui_dagthomas", + "title": "SDXL Auto Prompter" + }, + { + "author": "marhensa", + "description": "Input your desired final output resolution, and it will automatically set the recommended initial SDXL ratio/size and the Upscale Factor needed to reach that final resolution; there is also an option for a 2x/4x reverse Upscale Factor. This all helps avoid bad/arbitrary initial ratios/resolutions.", + "files": [ + "https://github.com/marhensa/sdxl-recommended-res-calc" + ], + "install_type": "git-clone", + "reference": "https://github.com/marhensa/sdxl-recommended-res-calc", + "title": "Recommended Resolution Calculator" + }, + { + "author": "Nuked", + "description": "A suite of custom nodes for ComfyUI that includes GPT text-prompt generation, LoadVideo, SaveVideo, LoadFramesFromFolder and FrameInterpolator", + "files": [ + "https://github.com/Nuked88/ComfyUI-N-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Nuked88/ComfyUI-N-Nodes", + "title": "ComfyUI-N-Nodes" + }, + { + "author": "richinsley", + "description": "Nodes:LFO_Triangle, LFO_Sine, SawtoothNode, SquareNode, PulseNode. ComfyUI custom nodes to create Low Frequency Oscillators.", + "files": [ + "https://github.com/richinsley/Comfy-LFO" + ], + "install_type": "git-clone", + "reference": "https://github.com/richinsley/Comfy-LFO", + "title": "Comfy-LFO" + }, + { + "author": "Beinsezii", + "description": "This contains all-in-one 'principled' nodes for T2I, I2I, refining, and scaling. Additionally it has many tools for directly manipulating the color of latents, high res fix math, and scripted image post-processing.", + "files": [ + "https://github.com/Beinsezii/bsz-cui-extras" + ], + "install_type": "git-clone", + "reference": "https://github.com/Beinsezii/bsz-cui-extras", + "title": "bsz-cui-extras" + }, + { + "author": "youyegit", + "description": "Nodes:TdxhImageToSize, TdxhImageToSizeAdvanced, TdxhLoraLoader, TdxhIntInput, TdxhFloatInput, TdxhStringInput. Some nodes for Stable Diffusion ComfyUI. Sometimes it is convenient to use fewer nodes to do the same things.", + "files": [ + "https://github.com/youyegit/tdxh_node_comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/youyegit/tdxh_node_comfyui", + "title": "tdxh_node_comfyui" + }, + { + "author": "Sxela", + "description": "Nodes:LoadFrameSequence, LoadFrame", + "files": [ + "https://github.com/Sxela/ComfyWarp" + ], + "install_type": "git-clone", + "reference": "https://github.com/Sxela/ComfyWarp", + "title": "ComfyWarp" + }, + { + "author": "skfoo", + "description": "Nodes:MultiLora Loader, Lora Text Extractor. 
Provides a node for assisting in loading LoRAs through text.", + "files": [ + "https://github.com/skfoo/ComfyUI-Coziness" + ], + "install_type": "git-clone", + "reference": "https://github.com/skfoo/ComfyUI-Coziness", + "title": "ComfyUI-Coziness" + }, + { + "author": "YOUR-WORST-TACO", + "description": "Nodes:TacoLatent, TacoAnimatedLoader, TacoImg2ImgAnimatedLoader, TacoGifMaker.", + "files": [ + "https://github.com/YOUR-WORST-TACO/ComfyUI-TacoNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/YOUR-WORST-TACO/ComfyUI-TacoNodes", + "title": "ComfyUI-TacoNodes" + }, + { + "author": "Lerc", + "description": "This extension provides a full page image editor with mask support. There are two nodes, one to receive images from the editor and one to send images to the editor.", + "files": [ + "https://github.com/Lerc/canvas_tab" + ], + "install_type": "git-clone", + "reference": "https://github.com/Lerc/canvas_tab", + "title": "Canvas Tab" + }, + { + "author": "Ttl", + "description": "A custom ComfyUI node designed for rapid latent upscaling using a compact neural network, eliminating the need for VAE-based decoding and encoding.", + "files": [ + "https://github.com/Ttl/ComfyUi_NNLatentUpscale" + ], + "install_type": "git-clone", + "reference": "https://github.com/Ttl/ComfyUi_NNLatentUpscale", + "title": "ComfyUI Neural network latent upscale custom node" + }, + { + "author": "spro", + "description": "Nodes: Latent Mirror. Node to mirror a latent along the Y (vertical / left to right) or X (horizontal / top to bottom) axis.", + "files": [ + "https://github.com/spro/comfyui-mirror" + ], + "install_type": "git-clone", + "reference": "https://github.com/spro/comfyui-mirror", + "title": "Latent Mirror node for ComfyUI" + }, + { + "author": "Tropfchen", + "description": "Tired of forgetting and misspelling often weird names of embeddings you use? Or perhaps you use only one, because you forgot you have tens of them installed?", + "files": [ + "https://github.com/Tropfchen/ComfyUI-Embedding_Picker" + ], + "install_type": "git-clone", + "reference": "https://github.com/Tropfchen/ComfyUI-Embedding_Picker", + "title": "Embedding Picker" + }, + { + "author": "Acly", + "description": "Nodes: Load Image (Base64), Load Mask (Base64), Send Image (WebSocket), Crop Image, Apply Mask to Image. Provides nodes geared towards using ComfyUI as a backend for external tools.\nNOTE: This extension is necessary when using an external tool like [comfyui-capture-inference](https://github.com/minux302/comfyui-capture-inference).", + "files": [ + "https://github.com/Acly/comfyui-tooling-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Acly/comfyui-tooling-nodes", + "title": "ComfyUI Nodes for External Tooling" + }, + { + "author": "Acly", + "description": "Experimental nodes for better inpainting with ComfyUI. Adds two nodes which allow using the [a/Fooocus](https://github.com/lllyasviel/Fooocus) inpaint model. It's a small and flexible patch which can be applied to any SDXL checkpoint and will transform it into an inpaint model. This model can then be used like other inpaint models, and provides the same benefits. 
[a/Read more](https://github.com/lllyasviel/Fooocus/discussions/414)", + "files": [ + "https://github.com/Acly/comfyui-inpaint-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Acly/comfyui-inpaint-nodes", + "title": "ComfyUI Inpaint Nodes" + }, + { + "author": "picturesonpictures", + "description": "A collection of custom nodes for ComfyUI. Includes a quick canny edge detection node with unconventional settings, simple LoRA stack nodes for workflow efficiency, and a customizable aspect ratio node.", + "files": [ + "https://github.com/picturesonpictures/comfy_PoP" + ], + "install_type": "git-clone", + "reference": "https://github.com/picturesonpictures/comfy_PoP", + "title": "comfy_PoP" + }, + { + "author": "Dream Project", + "description": "This extension offers various nodes that are useful for Deforum-like animations in ComfyUI.", + "files": [ + "https://github.com/alt-key-project/comfyui-dream-project" + ], + "install_type": "git-clone", + "reference": "https://github.com/alt-key-project/comfyui-dream-project", + "title": "Dream Project Animation Nodes" + }, + { + "author": "Dream Project", + "description": "Provides utilities for batch-based video generation workflows (such as AnimateDiff and Stable Video Diffusion).", + "files": [ + "https://github.com/alt-key-project/comfyui-dream-video-batches" + ], + "install_type": "git-clone", + "reference": "https://github.com/alt-key-project/comfyui-dream-video-batches", + "title": "Dream Video Batches" + }, + { + "author": "seanlynch", + "description": "This package contains three nodes to help you compute optical flow between pairs of images, usually adjacent frames in a video, visualize the flow, and apply the flow to another image of the same dimensions. Most of the code is from Deforum, so this is released under the same license (MIT).", + "files": [ + "https://github.com/seanlynch/comfyui-optical-flow" + ], + "install_type": "git-clone", + "reference": "https://github.com/seanlynch/comfyui-optical-flow", + "title": "ComfyUI Optical Flow" + }, + { + "author": "ealkanat", + "description": "ComfyUI Easy Padding is a simple custom node that helps you add padding to images in ComfyUI.", + "files": [ + "https://github.com/ealkanat/comfyui_easy_padding" + ], + "install_type": "git-clone", + "reference": "https://github.com/ealkanat/comfyui_easy_padding", + "title": "ComfyUI Easy Padding" + }, + { + "author": "ArtBot2023", + "description": "Character face swap with LoRA and embeddings.", + "files": [ + "https://github.com/ArtBot2023/CharacterFaceSwap" + ], + "install_type": "git-clone", + "reference": "https://github.com/ArtBot2023/CharacterFaceSwap", + "title": "Character Face Swap" + }, + { + "author": "mav-rik", + "description": "This is a copy of [a/facerestore custom node](https://civitai.com/models/24690/comfyui-facerestore-node) with a small change to support the CodeFormer Fidelity parameter. 
These ComfyUI nodes can be used to restore faces in images similar to the face restore option in AUTOMATIC1111 webui.\nNOTE: To use this node, you need to download the face restoration model and face detection model from the 'Install models' menu.", + "files": [ + "https://github.com/mav-rik/facerestore_cf" + ], + "install_type": "git-clone", + "reference": "https://github.com/mav-rik/facerestore_cf", + "title": "Facerestore CF (Code Former)" + }, + { + "author": "braintacles", + "description": "Nodes: CLIPTextEncodeSDXL-Multi-IO, CLIPTextEncodeSDXL-Pipe, Empty Latent Image from Aspect-Ratio, Random Find and Replace.", + "files": [ + "https://github.com/braintacles/braintacles-comfyui-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/braintacles/braintacles-comfyui-nodes", + "title": "braintacles-nodes" + }, + { + "author": "hayden-fr", + "description": "Manage models: browsing, download and delete.", + "files": [ + "https://github.com/hayden-fr/ComfyUI-Model-Manager" + ], + "install_type": "git-clone", + "reference": "https://github.com/hayden-fr/ComfyUI-Model-Manager", + "title": "ComfyUI-Model-Manager" + }, + { + "author": "hayden-fr", + "description": "Image Browsing: browsing, download and delete.", + "files": [ + "https://github.com/hayden-fr/ComfyUI-Image-Browsing" + ], + "install_type": "git-clone", + "reference": "https://github.com/hayden-fr/ComfyUI-Image-Browsing", + "title": "ComfyUI-Image-Browsing" + }, + { + "author": "ali1234", + "description": "Implements iteration over sequences within a single workflow run. [w/NOTE: This node replaces the execution of ComfyUI for iterative processing functionality.]", + "files": [ + "https://github.com/ali1234/comfyui-job-iterator" + ], + "install_type": "git-clone", + "reference": "https://github.com/ali1234/comfyui-job-iterator", + "title": "comfyui-job-iterator" + }, + { + "author": "jmkl", + "description": "ComfyUI custom user.css and some script stuff. mainly for web interface.", + "files": [ + "https://github.com/jmkl/ComfyUI-ricing" + ], + "install_type": "git-clone", + "reference": "https://github.com/jmkl/ComfyUI-ricing", + "title": "ComfyUI Ricing" + }, + { + "author": "budihartono", + "description": "Nodes: OTX Multiple Values, OTX KSampler Feeder. This extension provides custom nodes for ComfyUI created for personal projects. Made available for reference. Nodes may be updated or changed intermittently or not at all. Review & test before use.", + "files": [ + "https://github.com/budihartono/comfyui_otonx_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/budihartono/comfyui_otonx_nodes", + "title": "Otonx's Custom Nodes" + }, + { + "author": "ramyma", + "description": "Nodes: Base64Image Input Node, Base64Image Output Node. [a/A8R8](https://github.com/ramyma/a8r8) supporting nodes to integrate with ComfyUI", + "files": [ + "https://github.com/ramyma/A8R8_ComfyUI_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/ramyma/A8R8_ComfyUI_nodes", + "title": "A8R8 ComfyUI Nodes" + }, + { + "author": "spinagon", + "description": "Node for generating almost seamless textures, based on similar setting from A1111.", + "files": [ + "https://github.com/spinagon/ComfyUI-seamless-tiling" + ], + "install_type": "git-clone", + "reference": "https://github.com/spinagon/ComfyUI-seamless-tiling", + "title": "Seamless tiling Node for ComfyUI" + }, + { + "author": "BiffMunky", + "description": "A small set of nodes I created for various numerical and text inputs. 
Features image saver with ability to have JSON saved to separate folder, parameter collection nodes, two aesthetic scoring models, switches for text and numbers, and conversion of string to numeric and vice versa.", + "files": [ + "https://github.com/tusharbhutt/Endless-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/tusharbhutt/Endless-Nodes", + "title": "Endless \ufe0f\ud83c\udf0a\u2728 Nodes" + }, + { + "author": "spacepxl", + "description": "Add Image Save nodes for TIFF 16 bit and EXR 32 bit formats. Probably only useful if you're applying a LUT or other color corrections, and care about preserving as much color accuracy as possible.", + "files": [ + "https://github.com/spacepxl/ComfyUI-HQ-Image-Save" + ], + "install_type": "git-clone", + "reference": "https://github.com/spacepxl/ComfyUI-HQ-Image-Save", + "title": "ComfyUI-HQ-Image-Save" + }, + { + "author": "spacepxl", + "description": "Image and matte filtering nodes for ComfyUI `image/filters/*`", + "files": [ + "https://github.com/spacepxl/ComfyUI-Image-Filters" + ], + "install_type": "git-clone", + "reference": "https://github.com/spacepxl/ComfyUI-Image-Filters", + "title": "ComfyUI-Image-Filters" + }, + { + "author": "spacepxl", + "description": "Unofficial ComfyUI implementation of [a/RAVE](https://rave-video.github.io/)", + "files": [ + "https://github.com/spacepxl/ComfyUI-RAVE" + ], + "install_type": "git-clone", + "reference": "https://github.com/spacepxl/ComfyUI-RAVE", + "title": "ComfyUI-RAVE" + }, + { + "author": "PTA", + "description": "A ComfyUI extension to apply better nodes layout algorithm to ComfyUI workflow (mostly for visualization purpose)", + "files": [ + "https://github.com/phineas-pta/comfyui-auto-nodes-layout" + ], + "install_type": "git-clone", + "reference": "https://github.com/phineas-pta/comfyui-auto-nodes-layout", + "title": "auto nodes layout" + }, + { + "author": "receyuki", + "description": "ComfyUI node version of the SD Prompt Reader.", + "files": [ + "https://github.com/receyuki/comfyui-prompt-reader-node" + ], + "install_type": "git-clone", + "reference": "https://github.com/receyuki/comfyui-prompt-reader-node", + "title": "comfyui-prompt-reader-node" + }, + { + "author": "rklaffehn", + "description": "Nodes: RK_CivitAIMetaChecker, RK_CivitAIAddHashes.", + "files": [ + "https://github.com/rklaffehn/rk-comfy-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/rklaffehn/rk-comfy-nodes", + "title": "rk-comfy-nodes" + }, + { + "author": "cubiq", + "description": "Essential nodes that are weirdly missing from ComfyUI core. With few exceptions they are new features and not commodities. I hope this will be just a temporary repository until the nodes get included into ComfyUI.", + "files": [ + "https://github.com/cubiq/ComfyUI_essentials" + ], + "install_type": "git-clone", + "reference": "https://github.com/cubiq/ComfyUI_essentials", + "title": "ComfyUI Essentials" + }, + { + "author": "Clybius", + "description": "Nodes: Latent Diffusion Mega Modifier. ComfyUI nodes which modify the latent during the diffusion process. 
(Sharpness, Tonemap, Rescale, Extra Noise)", + "files": [ + "https://github.com/Clybius/ComfyUI-Latent-Modifiers" + ], + "install_type": "git-clone", + "reference": "https://github.com/Clybius/ComfyUI-Latent-Modifiers", + "title": "ComfyUI-Latent-Modifiers" + }, + { + "author": "Clybius", + "description": "Nodes: SamplerCustomNoise, SamplerCustomNoiseDuo, SamplerCustomModelMixtureDuo, SamplerRES_Momentumized, SamplerDPMPP_DualSDE_Momentumized, SamplerCLYB_4M_SDE_Momentumized, SamplerTTM, SamplerLCMCustom\nThis extension provides various custom samplers not offered by the default nodes in ComfyUI.", + "files": [ + "https://github.com/Clybius/ComfyUI-Extra-Samplers" + ], + "install_type": "git-clone", + "reference": "https://github.com/Clybius/ComfyUI-Extra-Samplers", + "title": "ComfyUI Extra Samplers" + }, + { + "author": "mcmonkeyprojects", + "description": "Extension for StableSwarmUI, ComfyUI, and AUTOMATIC1111 Stable Diffusion WebUI that enables a way to use higher CFG Scales without color issues. This works by clamping latents between steps.", + "files": [ + "https://github.com/mcmonkeyprojects/sd-dynamic-thresholding" + ], + "install_type": "git-clone", + "reference": "https://github.com/mcmonkeyprojects/sd-dynamic-thresholding", + "title": "Stable Diffusion Dynamic Thresholding (CFG Scale Fix)" + }, + { + "author": "Tropfchen", + "description": "A slightly different Resolution Selector node, allowing to freely change base resolution and aspect ratio, with options to maintain the pixel count or use the base resolution as the highest or lowest dimension.", + "files": [ + "https://github.com/Tropfchen/ComfyUI-yaResolutionSelector" + ], + "install_type": "git-clone", + "reference": "https://github.com/Tropfchen/ComfyUI-yaResolutionSelector", + "title": "YARS: Yet Another Resolution Selector" + }, + { + "author": "chrisgoringe", + "description": "Adds KSampler custom nodes with variation seed and variation strength.", + "files": [ + "https://github.com/chrisgoringe/cg-noise" + ], + "install_type": "git-clone", + "reference": "https://github.com/chrisgoringe/cg-noise", + "title": "Variation seeds" + }, + { + "author": "chrisgoringe", + "description": "A custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow.", + "files": [ + "https://github.com/chrisgoringe/cg-image-picker" + ], + "install_type": "git-clone", + "reference": "https://github.com/chrisgoringe/cg-image-picker", + "title": "Image chooser" + }, + { + "author": "chrisgoringe", + "description": "A set of nodes that allow data to be 'broadcast' to some or all unconnected inputs. Greatly reduces link spaghetti.", + "files": [ + "https://github.com/chrisgoringe/cg-use-everywhere" + ], + "install_type": "git-clone", + "nodename_pattern": "(^(Prompts|Anything) Everywhere|Simple String)", + "reference": "https://github.com/chrisgoringe/cg-use-everywhere", + "title": "Use Everywhere (UE Nodes)" + }, + { + "author": "chrisgoringe", + "description": "Prompt Info", + "files": [ + "https://github.com/chrisgoringe/cg-prompt-info" + ], + "install_type": "git-clone", + "reference": "https://github.com/chrisgoringe/cg-prompt-info", + "title": "Prompt Info" + }, + { + "author": "TGu-97", + "description": "Nodes: MPN Switch, MPN Reroute, PN Switch. This is a set of custom nodes for ComfyUI. 
Mainly focus on control switches.", + "files": [ + "https://github.com/TGu-97/ComfyUI-TGu-utils" + ], + "install_type": "git-clone", + "reference": "https://github.com/TGu-97/ComfyUI-TGu-utils", + "title": "TGu Utilities" + }, + { + "author": "seanlynch", + "description": "Nodes: SRL Conditional Interrupt, SRL Format String, SRL Eval, SRL Filter Image List. This is a collection of nodes I find useful. Note that at least one module allows execution of arbitrary code. Do not use any of these nodes on a system that allow untrusted users to control workflows or inputs.[w/WARNING: The custom nodes in this extension are vulnerable to **security risks** because they allow the execution of arbitrary code through the workflow]", + "files": [ + "https://github.com/seanlynch/srl-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/seanlynch/srl-nodes", + "title": "SRL's nodes" + }, + { + "author": "alpertunga-bile", + "description": "Custom AI prompt generator node for ComfyUI.", + "files": [ + "https://github.com/alpertunga-bile/prompt-generator-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/alpertunga-bile/prompt-generator-comfyui", + "title": "prompt-generator" + }, + { + "author": "mlinmg", + "description": "A LaMa prerocessor for ComfyUI. This preprocessor finally enable users to generate coherent inpaint and outpaint prompt-free. The best results are given on landscapes, not so much in drawings/animation.", + "files": [ + "https://github.com/mlinmg/ComfyUI-LaMA-Preprocessor" + ], + "install_type": "git-clone", + "reference": "https://github.com/mlinmg/ComfyUI-LaMA-Preprocessor", + "title": "LaMa Preprocessor [WIP]" + }, + { + "author": "kijai", + "description": "Various quality of life -nodes for ComfyUI, mostly just visual stuff to improve usability.", + "files": [ + "https://github.com/kijai/ComfyUI-KJNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/kijai/ComfyUI-KJNodes", + "title": "KJNodes for ComfyUI" + }, + { + "author": "kijai", + "description": "ComfyUI- CCSR upscaler node", + "files": [ + "https://github.com/kijai/ComfyUI-CCSR" + ], + "install_type": "git-clone", + "reference": "https://github.com/kijai/ComfyUI-CCSR", + "title": "ComfyUI-CCSR" + }, + { + "author": "kijai", + "description": "Preliminary use of SVD in ComfyUI.\nNOTE: Quick Implementation, Unstable. See details on repositories.", + "files": [ + "https://github.com/kijai/ComfyUI-SVD" + ], + "install_type": "git-clone", + "reference": "https://github.com/kijai/ComfyUI-SVD", + "title": "ComfyUI-SVD" + }, + { + "author": "kijai", + "description": "This is a wrapper node for Marigold depth estimation: [https://github.com/prs-eth/Marigold](https://github.com/kijai/ComfyUI-Marigold). 
Currently using the same diffusers pipeline as in the original implementation, so in addition to the custom node, you need the model in diffusers format.\nNOTE: See details in repo to install.", + "files": [ + "https://github.com/kijai/ComfyUI-Marigold" + ], + "install_type": "git-clone", + "reference": "https://github.com/kijai/ComfyUI-Marigold", + "title": "Marigold depth estimation in ComfyUI" + }, + { + "author": "kijai", + "description": "Node to use [a/DDColor](https://github.com/piddnad/DDColor) in ComfyUI.", + "files": [ + "https://github.com/kijai/ComfyUI-DDColor" + ], + "install_type": "git-clone", + "reference": "https://github.com/kijai/ComfyUI-DDColor", + "title": "ComfyUI-DDColor" + }, + { + "author": "Kijai", + "description": "This is a trainer for AnimateDiff MotionLoRAs, based on the implementation of MotionDirector by ExponentialML.", + "files": [ + "https://github.com/kijai/ComfyUI-ADMotionDirector" + ], + "install_type": "git-clone", + "reference": "https://github.com/kijai/ComfyUI-ADMotionDirector", + "title": "Animatediff MotionLoRA Trainer" + }, + { + "author": "hhhzzyang", + "description": "Nodes: LamaaModelLoad, LamaApply, YamlConfigLoader. a costumer node is realized to remove anything/inpainting anything from a picture by mask inpainting.[w/WARN:This extension includes the entire model, which can result in a very long initial installation time, and there may be some compatibility issues with older dependencies and ComfyUI.]", + "files": [ + "https://github.com/hhhzzyang/Comfyui_Lama" + ], + "install_type": "git-clone", + "reference": "https://github.com/hhhzzyang/Comfyui_Lama", + "title": "Comfyui-Lama" + }, + { + "author": "thedyze", + "description": "Customize the information saved in file- and folder names. Use the values of sampler parameters as part of file or folder names. Save your positive & negative prompt as entries in a JSON (text) file, in each folder.", + "files": [ + "https://github.com/thedyze/save-image-extended-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/thedyze/save-image-extended-comfyui", + "title": "Save Image Extended for ComfyUI" + }, + { + "author": "SOELexicon", + "description": "ComfyUI-LexTools is a Python-based image processing and analysis toolkit that uses machine learning models for semantic image segmentation, image scoring, and image captioning.", + "files": [ + "https://github.com/SOELexicon/ComfyUI-LexTools" + ], + "install_type": "git-clone", + "reference": "https://github.com/SOELexicon/ComfyUI-LexTools", + "title": "ComfyUI-LexTools" + }, + { + "author": "mikkel", + "description": "The ComfyUI Text Overlay Plugin provides functionalities for superimposing text on images. 
Users can select different font types, set text size, choose color, and adjust the text's position on the image.", + "files": [ + "https://github.com/mikkel/ComfyUI-text-overlay" + ], + "install_type": "git-clone", + "reference": "https://github.com/mikkel/ComfyUI-text-overlay", + "title": "ComfyUI - Text Overlay Plugin" + }, + { + "author": "avatechai", + "description": "Include nodes for sam + bpy operation, that allows workflow creations for generative 2d character rig.", + "files": [ + "https://github.com/avatechai/avatar-graph-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/avatechai/avatar-graph-comfyui", + "title": "avatar-graph-comfyui" + }, + { + "author": "TRI3D-LC", + "description": "Nodes: tri3d-extract-hand, tri3d-fuzzification, tri3d-position-hands, tri3d-atr-parse.", + "files": [ + "https://github.com/TRI3D-LC/tri3d-comfyui-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/TRI3D-LC/tri3d-comfyui-nodes", + "title": "tri3d-comfyui-nodes" + }, + { + "author": "storyicon", + "description": "Based on GroundingDino and SAM, use semantic strings to segment any element in an image. The comfyui version of sd-webui-segment-anything.", + "files": [ + "https://github.com/storyicon/comfyui_segment_anything" + ], + "install_type": "git-clone", + "reference": "https://github.com/storyicon/comfyui_segment_anything", + "title": "segment anything" + }, + { + "author": "a1lazydog", + "description": "Load mp3 files and use the audio nodes to power animations and prompt scheduling. Use with FizzNodes.", + "files": [ + "https://github.com/a1lazydog/ComfyUI-AudioScheduler" + ], + "install_type": "git-clone", + "reference": "https://github.com/a1lazydog/ComfyUI-AudioScheduler", + "title": "ComfyUI-AudioScheduler" + }, + { + "author": "whatbirdisthat", + "description": "Cyberdolphin Suite of ComfyUI nodes for wiring up things.", + "files": [ + "https://github.com/whatbirdisthat/cyberdolphin" + ], + "install_type": "git-clone", + "reference": "https://github.com/whatbirdisthat/cyberdolphin", + "title": "cyberdolphin" + }, + { + "author": "chrish-slingshot", + "description": "A mixture of effects and quality of life nodes. Nodes: ImageGlitcher (gives an image a cool glitchy effect), ColorStylizer (highlights a single color in an image), QueryLocalLLM (queries a local LLM API though oobabooga), SDXLReslution (resolution picker for the standard SDXL resolutions, the complete list), SDXLResolutionSplit (splits the SDXL resolution into width and height). ", + "files": [ + "https://github.com/chrish-slingshot/CrasHUtils" + ], + "install_type": "git-clone", + "reference": "https://github.com/chrish-slingshot/CrasHUtils", + "title": "CrasH Utils" + }, + { + "author": "spinagon", + "description": "Nodes: Image Resize (seam carving). Seam carving (image resize) for ComfyUI. Based on [a/https://github.com/li-plus/seam-carving](https://github.com/li-plus/seam-carving). With seam carving algorithm, the image could be intelligently resized while keeping the important contents undistorted. The carving process could be further guided, so that an object could be removed from the image without apparent artifacts.", + "files": [ + "https://github.com/spinagon/ComfyUI-seam-carving" + ], + "install_type": "git-clone", + "reference": "https://github.com/spinagon/ComfyUI-seam-carving", + "title": "ComfyUI-seam-carving" + }, + { + "author": "YMC", + "description": "ymc 's nodes for comfyui. 
This extension is composed of nodes that provide various utility features such as text, region, and I/O.", + "files": [ + "https://github.com/YMC-GitHub/ymc-node-suite-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/YMC-GitHub/ymc-node-suite-comfyui", + "title": "ymc-node-suite-comfyui" + }, + { + "author": "chibiace", + "description": "Nodes:Loader, Prompts, ImageTool, Wildcards, LoadEmbedding, ConditionText, SaveImages, ...", + "files": [ + "https://github.com/chibiace/ComfyUI-Chibi-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/chibiace/ComfyUI-Chibi-Nodes", + "title": "ComfyUI-Chibi-Nodes" + }, + { + "author": "DigitalIO", + "description": "Wildcard implementation that can be reproduced with workflows.", + "files": [ + "https://github.com/DigitalIO/ComfyUI-stable-wildcards" + ], + "install_type": "git-clone", + "reference": "https://github.com/DigitalIO/ComfyUI-stable-wildcards", + "title": "ComfyUI-stable-wildcards" + }, + { + "author": "THtianhao", + "description": "Nodes:RetainFace, FaceFusion, RatioMerge2Image, MaskMerge2Image, ReplaceBoxImg, ExpandMaskBox, FaceSkin, SkinRetouching, PortraitEnhancement, ...", + "files": [ + "https://github.com/THtianhao/ComfyUI-Portrait-Maker" + ], + "install_type": "git-clone", + "reference": "https://github.com/THtianhao/ComfyUI-Portrait-Maker", + "title": "ComfyUI-Portrait-Maker" + }, + { + "author": "THtianhao", + "description": "The official ComfyUI version of facechain greatly improves the speed of reasoning and has great custom process controls.", + "files": [ + "https://github.com/THtianhao/ComfyUI-FaceChain" + ], + "install_type": "git-clone", + "reference": "https://github.com/THtianhao/ComfyUI-FaceChain", + "title": "ComfyUI-FaceChain" + }, + { + "author": "zer0TF", + "description": "Adds a configurable folder watcher that auto-converts Comfy metadata into a Civitai-friendly format for automatic resource tagging when you upload images. Oh, and it makes your UI awesome, too. \ud83d\udc9c", + "files": [ + "https://github.com/zer0TF/cute-comfy" + ], + "install_type": "git-clone", + "reference": "https://github.com/zer0TF/cute-comfy", + "title": "Cute Comfy" + }, + { + "author": "chflame163", + "description": "A text-to-speech plugin used under ComfyUI. It utilizes the Microsoft Speech TTS interface to convert text content into MP3 format audio files.", + "files": [ + "https://github.com/chflame163/ComfyUI_MSSpeech_TTS" + ], + "install_type": "git-clone", + "reference": "https://github.com/chflame163/ComfyUI_MSSpeech_TTS", + "title": "ComfyUI_MSSpeech_TTS" + }, + { + "author": "chflame163", + "description": "Nodes:Word Cloud, Load Text File", + "files": [ + "https://github.com/chflame163/ComfyUI_WordCloud" + ], + "install_type": "git-clone", + "reference": "https://github.com/chflame163/ComfyUI_WordCloud", + "title": "ComfyUI_WordCloud" + }, + { + "author": "drustan-hawk", + "description": "This repository contains typed primitives for ComfyUI. 
The motivation for these primitives is that the standard primitive node cannot be routed.", + "files": [ + "https://github.com/drustan-hawk/primitive-types" + ], + "install_type": "git-clone", + "reference": "https://github.com/drustan-hawk/primitive-types", + "title": "primitive-types" + }, + { + "author": "shadowcz007", + "description": "3D, ScreenShareNode & FloatingVideoNode, SpeechRecognition & SpeechSynthesis, GPT, LoadImagesFromLocal, Layers, Other Nodes, ...", + "files": [ + "https://github.com/shadowcz007/comfyui-mixlab-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/shadowcz007/comfyui-mixlab-nodes", + "title": "comfyui-mixlab-nodes" + }, + { + "author": "shadowcz007", + "description": "Nodes:Detect By Label.", + "files": [ + "https://github.com/shadowcz007/comfyui-ultralytics-yolo" + ], + "install_type": "git-clone", + "reference": "https://github.com/shadowcz007/comfyui-ultralytics-yolo", + "title": "comfyui-ultralytics-yolo" + }, + { + "author": "shadowcz007", + "description": "[a/openai Consistency Decoder](https://github.com/openai/consistencydecoder). After downloading the [a/OpenAI VAE model](https://openaipublic.azureedge.net/diff-vae/c9cebd3132dd9c42936d803e33424145a748843c8f716c0814838bdc8a2fe7cb/decoder.pt), place it in the `model/vae` directory for use.", + "files": [ + "https://github.com/shadowcz007/comfyui-consistency-decoder" + ], + "install_type": "git-clone", + "reference": "https://github.com/shadowcz007/comfyui-consistency-decoder", + "title": "Consistency Decoder" + }, + { + "author": "ostris", + "description": "This is a collection of custom nodes for ComfyUI that I made for some QOL. I will be adding much more advanced ones in the future once I get more familiar with the API.", + "files": [ + "https://github.com/ostris/ostris_nodes_comfyui" + ], + "install_type": "git-clone", + "nodename_pattern": "- Ostris$", + "reference": "https://github.com/ostris/ostris_nodes_comfyui", + "title": "Ostris Nodes ComfyUI" + }, + { + "author": "0xbitches", + "description": "This custom node implements a Latent Consistency Model sampler in ComfyUI. (LCM)", + "files": [ + "https://github.com/0xbitches/ComfyUI-LCM" + ], + "install_type": "git-clone", + "reference": "https://github.com/0xbitches/ComfyUI-LCM", + "title": "Latent Consistency Model for ComfyUI" + }, + { + "author": "aszc-dev", + "description": "This extension contains a set of custom nodes for ComfyUI that allow you to use Core ML models in your ComfyUI workflows. The models can be obtained here, or you can convert your own models using coremltools. 
The main motivation behind using Core ML models in ComfyUI is to allow you to utilize the ANE (Apple Neural Engine) on Apple Silicon (M1/M2) machines to improve performance.", + "files": [ + "https://github.com/aszc-dev/ComfyUI-CoreMLSuite" + ], + "install_type": "git-clone", + "reference": "https://github.com/aszc-dev/ComfyUI-CoreMLSuite", + "title": "Core ML Suite for ComfyUI" + }, + { + "author": "taabata", + "description": "Nodes:Prompt editing, Word as Image", + "files": [ + "https://github.com/taabata/Comfy_Syrian_Falcon_Nodes/raw/main/SyrianFalconNodes.py" + ], + "install_type": "copy", + "reference": "https://github.com/taabata/Comfy_Syrian_Falcon_Nodes", + "title": "Syrian Falcon Nodes" + }, + { + "author": "taabata", + "description": "ComfyUI custom nodes for inpainting/outpainting using the new latent consistency model (LCM)", + "files": [ + "https://github.com/taabata/LCM_Inpaint-Outpaint_Comfy" + ], + "install_type": "git-clone", + "reference": "https://github.com/taabata/LCM_Inpaint-Outpaint_Comfy", + "title": "LCM_Inpaint-Outpaint_Comfy" + }, + { + "author": "noxinias", + "description": "Nodes: Noxin Complete Chime, Noxin Scaled Resolutions, Load from Noxin Prompt Library, Save to Noxin Prompt Library", + "files": [ + "https://github.com/noxinias/ComfyUI_NoxinNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/noxinias/ComfyUI_NoxinNodes", + "title": "ComfyUI_NoxinNodes" + }, + { + "author": "apesplat", + "description": "Extensions/Patches: Enables linking float and integer inputs and ouputs. Values are automatically cast to the correct type and clamped to the correct range. Works with both builtin and custom nodes.[w/NOTE: This repo patches ComfyUI's validate_inputs and map_node_over_list functions while running. May break depending on your version of ComfyUI. Can be deactivated in config.yaml.]Nodes: A collection of nodes for facilitating the generation of XY plots. Capable of plotting changes over most primitive values.", + "files": [ + "https://github.com/GMapeSplat/ComfyUI_ezXY" + ], + "install_type": "git-clone", + "reference": "https://github.com/GMapeSplat/ComfyUI_ezXY", + "title": "ezXY scripts and nodes" + }, + { + "author": "kinfolk0117", + "description": "Nodes:TileSplit, TileMerge.", + "files": [ + "https://github.com/kinfolk0117/ComfyUI_SimpleTiles" + ], + "install_type": "git-clone", + "reference": "https://github.com/kinfolk0117/ComfyUI_SimpleTiles", + "title": "SimpleTiles" + }, + { + "author": "kinfolk0117", + "description": "Nodes:GradientPatchModelAddDownscale (Kohya Deep Shrink).", + "files": [ + "https://github.com/kinfolk0117/ComfyUI_GradientDeepShrink" + ], + "install_type": "git-clone", + "reference": "https://github.com/kinfolk0117/ComfyUI_GradientDeepShrink", + "title": "ComfyUI_GradientDeepShrink" + }, + { + "author": "kinfolk0117", + "description": "Proof of concent on how to use IPAdapter to control tiled upscaling. 
NOTE: You need to have 'ComfyUI_IPAdapter_plus' installed.", + "files": [ + "https://github.com/kinfolk0117/ComfyUI_TiledIPAdapter" + ], + "install_type": "git-clone", + "reference": "https://github.com/kinfolk0117/ComfyUI_TiledIPAdapter", + "title": "TiledIPAdapter" + }, + { + "author": "kinfolk0117", + "description": "Use [a/Pilgram2](https://github.com/mgineer85/pilgram2) filters in ComfyUI", + "files": [ + "https://github.com/kinfolk0117/ComfyUI_Pilgram" + ], + "install_type": "git-clone", + "reference": "https://github.com/kinfolk0117/ComfyUI_Pilgram", + "title": "ComfyUI_Pilgram" + }, + { + "author": "Fictiverse", + "description": "Nodes:Color correction.", + "files": [ + "https://github.com/Fictiverse/ComfyUI_Fictiverse" + ], + "install_type": "git-clone", + "reference": "https://github.com/Fictiverse/ComfyUI_Fictiverse", + "title": "ComfyUI Fictiverse Nodes" + }, + { + "author": "idrirap", + "description": "This project is a fork of [a/https://github.com/Extraltodeus/LoadLoraWithTags](https://github.com/Extraltodeus/LoadLoraWithTags) The aim of these custom nodes is to get an easy access to the tags used to trigger a lora.", + "files": [ + "https://github.com/idrirap/ComfyUI-Lora-Auto-Trigger-Words" + ], + "install_type": "git-clone", + "reference": "https://github.com/idrirap/ComfyUI-Lora-Auto-Trigger-Words", + "title": "ComfyUI-Lora-Auto-Trigger-Words" + }, + { + "author": "aianimation55", + "description": "It's a super simple custom node for Comfy UI, to generate text, with a font size option. Useful for bigger labelling of nodes, helpful for wider screen captures or tutorials. Plus you can of course use the text within your generations.", + "files": [ + "https://github.com/aianimation55/ComfyUI-FatLabels" + ], + "install_type": "git-clone", + "reference": "https://github.com/aianimation55/ComfyUI-FatLabels", + "title": "Comfy UI FatLabels" + }, + { + "author": "noEmbryo", + "description": "PromptTermList (1-6): are some nodes that help with the creation of Prompts inside ComfyUI. Resolution Scale outputs image dimensions using a scale factor. Regex Text Chopper outputs the chopped parts of a text using RegEx.", + "files": [ + "https://github.com/noembryo/ComfyUI-noEmbryo" + ], + "install_type": "git-clone", + "reference": "https://github.com/noembryo/ComfyUI-noEmbryo", + "title": "noEmbryo nodes" + }, + { + "author": "mikkel", + "description": "The ComfyUI Mask Bounding Box Plugin provides functionalities for selecting a specific size mask from an image. 
Can be combined with ClipSEG to replace any aspect of an SDXL image with an SD1.5 output.", + "files": [ + "https://github.com/mikkel/comfyui-mask-boundingbox" + ], + "install_type": "git-clone", + "reference": "https://github.com/mikkel/comfyui-mask-boundingbox", + "title": "ComfyUI - Mask Bounding Box" + }, + { + "author": "ParmanBabra", + "description": "Nodes:Multi Lora Loader, Random (Prompt), Combine (Prompt), CSV Prompts Loader", + "files": [ + "https://github.com/ParmanBabra/ComfyUI-Malefish-Custom-Scripts" + ], + "install_type": "git-clone", + "reference": "https://github.com/ParmanBabra/ComfyUI-Malefish-Custom-Scripts", + "title": "ComfyUI-Malefish-Custom-Scripts" + }, + { + "author": "IAmMatan.com", + "description": "This extension adds nodes that allow you to easily serve your workflow (for example using a discord bot) ", + "files": [ + "https://github.com/matan1905/ComfyUI-Serving-Toolkit" + ], + "install_type": "git-clone", + "reference": "https://github.com/matan1905/ComfyUI-Serving-Toolkit", + "title": "ComfyUI Serving toolkit" + }, + { + "author": "PCMonsterx", + "description": "CSV Loader for prompt building within ComfyUI interface. Allows access to positive/negative prompts associated with a name. Selections are being pulled from CSV files.", + "files": [ + "https://github.com/PCMonsterx/ComfyUI-CSV-Loader" + ], + "install_type": "git-clone", + "reference": "https://github.com/PCMonsterx/ComfyUI-CSV-Loader", + "title": "ComfyUI-CSV-Loader" + }, + { + "author": "Trung0246", + "description": "Random nodes for ComfyUI I made to solve my struggle with ComfyUI (ex: pipe, process). Have varying quality.", + "files": [ + "https://github.com/Trung0246/ComfyUI-0246" + ], + "install_type": "git-clone", + "reference": "https://github.com/Trung0246/ComfyUI-0246", + "title": "ComfyUI-0246" + }, + { + "author": "fexli", + "description": "Nodes:FEImagePadForOutpaint, FEColorOut, FEColor2Image, FERandomizedColor2Image", + "files": [ + "https://github.com/fexli/fexli-util-node-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/fexli/fexli-util-node-comfyui", + "title": "fexli-util-node-comfyui" + }, + { + "author": "AbyssYuan0", + "description": "Nodes:ImageOverlap-badger, FloatToInt-badger, IntToString-badger, FloatToString-badger, ImageNormalization-badger, ImageScaleToSide-badger, NovelToFizz-badger.", + "files": [ + "https://github.com/AbyssYuan0/ComfyUI_BadgerTools" + ], + "install_type": "git-clone", + "reference": "https://github.com/AbyssYuan0/ComfyUI_BadgerTools", + "title": "ComfyUI_BadgerTools" + }, + { + "author": "palant", + "description": "This custom node provides various tools for resizing images. The goal is resizing without distorting proportions, yet without having to perform any calculations with the size of the original image. If a mask is present, it is resized and modified along with the image.", + "files": [ + "https://github.com/palant/image-resize-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/palant/image-resize-comfyui", + "title": "Image Resize for ComfyUI" + }, + { + "author": "palant", + "description": "This tool will turn entire workflows or parts of them into single integrated nodes. In a way, it is similar to the Node Templates functionality but hides the inner structure. 
This is useful if all you want is to reuse and quickly configure a bunch of nodes without caring how they are interconnected.", + "files": [ + "https://github.com/palant/integrated-nodes-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/palant/integrated-nodes-comfyui", + "title": "Integrated Nodes for ComfyUI" + }, + { + "author": "palant", + "description": "This custom node is largely identical to the usual Save Image but allows saving images also in JPEG and WEBP formats, the latter with both lossless and lossy compression. Metadata is embedded in the images as usual, and the resulting images can be used to load a workflow.", + "files": [ + "https://github.com/palant/extended-saveimage-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/palant/extended-saveimage-comfyui", + "title": "Extended Save Image for ComfyUI" + }, + { + "author": "whmc76", + "description": "Nodes:Openpose Editor Plus", + "files": [ + "https://github.com/whmc76/ComfyUI-Openpose-Editor-Plus" + ], + "install_type": "git-clone", + "reference": "https://github.com/whmc76/ComfyUI-Openpose-Editor-Plus", + "title": "ComfyUI-Openpose-Editor-Plus" + }, + { + "author": "martijnat", + "description": "a ComfyUI plugin for previewing latents without vae decoding. Useful for showing intermediate results and can be used a faster 'preview image' if you don't wan't to use vae decode.", + "files": [ + "https://github.com/martijnat/comfyui-previewlatent" + ], + "install_type": "git-clone", + "reference": "https://github.com/martijnat/comfyui-previewlatent", + "title": "comfyui-previewlatent" + }, + { + "author": "banodoco", + "description": "Steerable Motion is a ComfyUI node for batch creative interpolation. Our goal is to feature the best methods for steering motion with images as video models evolve.", + "files": [ + "https://github.com/banodoco/steerable-motion" + ], + "install_type": "git-clone", + "reference": "https://github.com/banodoco/steerable-motion", + "title": "Steerable Motion" + }, + { + "author": "gemell1", + "description": "Nodes:GMIC Image Processing.", + "files": [ + "https://github.com/gemell1/ComfyUI_GMIC" + ], + "install_type": "git-clone", + "reference": "https://github.com/gemell1/ComfyUI_GMIC", + "title": "ComfyUI_GMIC" + }, + { + "author": "LonicaMewinsky", + "description": "Nodes:BreakFrames, GetKeyFrames, MakeGrid.", + "files": [ + "https://github.com/LonicaMewinsky/ComfyUI-MakeFrame" + ], + "install_type": "git-clone", + "reference": "https://github.com/LonicaMewinsky/ComfyUI-MakeFrame", + "title": "ComfyBreakAnim" + }, + { + "author": "TheBarret", + "description": "Nodes:Prompter, RF Noise, SeedMod.", + "files": [ + "https://github.com/TheBarret/ZSuite" + ], + "install_type": "git-clone", + "reference": "https://github.com/TheBarret/ZSuite", + "title": "ZSuite" + }, + { + "author": "romeobuilderotti", + "description": "Add custom Metadata fields to your saved PNG files.", + "files": [ + "https://github.com/romeobuilderotti/ComfyUI-PNG-Metadata" + ], + "install_type": "git-clone", + "reference": "https://github.com/romeobuilderotti/ComfyUI-PNG-Metadata", + "title": "ComfyUI PNG Metadata" + }, + { + "author": "ka-puna", + "description": "NOTE: Concatenate Strings, Format Datetime String, Integer Caster, Multiline String, Truncate String. Yet Another Node Collection, a repository of simple nodes for ComfyUI. 
This repository eases the addition or removal of custom nodes to itself.", + "files": [ + "https://github.com/ka-puna/comfyui-yanc" + ], + "install_type": "git-clone", + "reference": "https://github.com/ka-puna/comfyui-yanc", + "title": "comfyui-yanc" + }, + { + "author": "amorano", + "description": "Compose like Substance Designer. Webcams, Media Streams (in/out), Tick animation, Color correction, Geometry manipulation, Pixel shader, Polygonal shape generator, Remap images gometry and color, Heavily inspired by WAS and MTB Node Suites.", + "files": [ + "https://github.com/Amorano/Jovimetrix" + ], + "install_type": "git-clone", + "nodename_pattern": " \\(jov\\)$", + "reference": "https://github.com/Amorano/Jovimetrix", + "title": "Jovimetrix Composition Nodes" + }, + { + "author": "Umikaze-job", + "description": "This extension simply connects the nodes and specifies the output path of the generated images to a manageable path.", + "files": [ + "https://github.com/Umikaze-job/select_folder_path_easy" + ], + "install_type": "git-clone", + "reference": "https://github.com/Umikaze-job/select_folder_path_easy", + "title": "select_folder_path_easy" + }, + { + "author": "Niutonian", + "description": "Nodes:Noodle webcam is a node that records frames and send them to your favourite node.", + "files": [ + "https://github.com/Niutonian/ComfyUi-NoodleWebcam" + ], + "install_type": "git-clone", + "reference": "https://github.com/Niutonian/ComfyUi-NoodleWebcam", + "title": "ComfyUi-NoodleWebcam" + }, + { + "author": "Feidorian", + "description": "This extension provides various custom nodes. literals, loaders, logic, output, switches", + "files": [ + "https://github.com/Feidorian/feidorian-ComfyNodes" + ], + "install_type": "git-clone", + "nodename_pattern": "^Feidorian_", + "reference": "https://github.com/Feidorian/feidorian-ComfyNodes", + "title": "feidorian-ComfyNodes" + }, + { + "author": "wutipong", + "description": "Nodes:Create N-Token String", + "files": [ + "https://github.com/wutipong/ComfyUI-TextUtils" + ], + "install_type": "git-clone", + "reference": "https://github.com/wutipong/ComfyUI-TextUtils", + "title": "ComfyUI-TextUtils" + }, + { + "author": "natto-maki", + "description": "Nodes:OpenAI DALLe3, OpenAI Translate to English, String Function, Seed Generator", + "files": [ + "https://github.com/natto-maki/ComfyUI-NegiTools" + ], + "install_type": "git-clone", + "reference": "https://github.com/natto-maki/ComfyUI-NegiTools", + "title": "ComfyUI-NegiTools" + }, + { + "author": "LonicaMewinsky", + "description": "Nodes:SaveTifImage. ComfyUI custom node for purpose of saving image as uint16 tif file.", + "files": [ + "https://github.com/LonicaMewinsky/ComfyUI-RawSaver" + ], + "install_type": "git-clone", + "reference": "https://github.com/LonicaMewinsky/ComfyUI-RawSaver", + "title": "ComfyUI-RawSaver" + }, + { + "author": "jojkaart", + "description": "Nodes:LCMScheduler, SamplerLCMAlternative, SamplerLCMCycle. ComfyUI Custom Sampler nodes that add a new improved LCM sampler functions", + "files": [ + "https://github.com/jojkaart/ComfyUI-sampler-lcm-alternative" + ], + "install_type": "git-clone", + "reference": "https://github.com/jojkaart/ComfyUI-sampler-lcm-alternative", + "title": "ComfyUI-sampler-lcm-alternative" + }, + { + "author": "GTSuya-Studio", + "description": "ComfyUI-GTSuya-Nodes is a ComfyUI extension designed to add several wildcards supports into ComfyUI. 
Wildcards allow you to use __name__ syntax in your prompt to get a random line from a file named name.txt in a wildcards directory.", + "files": [ + "https://github.com/GTSuya-Studio/ComfyUI-Gtsuya-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/GTSuya-Studio/ComfyUI-Gtsuya-Nodes", + "title": "ComfyUI-GTSuya-Nodes" + }, + { + "author": "oyvindg", + "description": "Nodes: BinaryImageMask, ImagePadding, LoadLastCreatedImage, RandomMask, TransparentImage.", + "files": [ + "https://github.com/oyvindg/ComfyUI-TrollSuite" + ], + "install_type": "git-clone", + "reference": "https://github.com/oyvindg/ComfyUI-TrollSuite", + "title": "ComfyUI-TrollSuite" + }, + { + "author": "drago87", + "description": "Nodes:File Padding, Image Info, VAE Loader With Name", + "files": [ + "https://github.com/drago87/ComfyUI_Dragos_Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/drago87/ComfyUI_Dragos_Nodes", + "title": "ComfyUI_Dragos_Nodes" + }, + { + "author": "ansonkao", + "description": "Nodes: Mask to Centroid, Mask to Eigenvector. A small collection of custom nodes for use with ComfyUI, for geometry calculations", + "files": [ + "https://github.com/ansonkao/comfyui-geometry" + ], + "install_type": "git-clone", + "reference": "https://github.com/ansonkao/comfyui-geometry", + "title": "comfyui-geometry" + }, + { + "author": "bronkula", + "description": "Nodes:Fit Size From Int/Image/Resize, Load Image And Resize To Fit, Pick Image From Batch/List, Crop Image Into Even Pieces, Image Region To Mask... A simple set of nodes for making an image fit within a bounding box", + "files": [ + "https://github.com/bronkula/comfyui-fitsize" + ], + "install_type": "git-clone", + "reference": "https://github.com/bronkula/comfyui-fitsize", + "title": "comfyui-fitsize" + }, + { + "author": "toyxyz", + "description": "This node was created to send a webcam to ComfyUI in real time. 
This node is recommended for use with LCM.", + "files": [ + "https://github.com/toyxyz/ComfyUI_toyxyz_test_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/toyxyz/ComfyUI_toyxyz_test_nodes", + "title": "ComfyUI_toyxyz_test_nodes" + }, + { + "author": "thecooltechguy", + "description": "Easily use Stable Video Diffusion inside ComfyUI!", + "files": [ + "https://github.com/thecooltechguy/ComfyUI-Stable-Video-Diffusion" + ], + "install_type": "git-clone", + "reference": "https://github.com/thecooltechguy/ComfyUI-Stable-Video-Diffusion", + "title": "ComfyUI Stable Video Diffusion" + }, + { + "author": "thecooltechguy", + "description": "The easiest way to run & share any ComfyUI workflow [a/https://comfyrun.com](https://comfyrun.com)", + "files": [ + "https://github.com/thecooltechguy/ComfyUI-ComfyRun" + ], + "install_type": "git-clone", + "reference": "https://github.com/thecooltechguy/ComfyUI-ComfyRun", + "title": "ComfyUI-ComfyRun" + }, + { + "author": "thecooltechguy", + "description": "Easily use Magic Animate within ComfyUI!\n[w/WARN: This extension requires 15GB disk space.]", + "files": [ + "https://github.com/thecooltechguy/ComfyUI-MagicAnimate" + ], + "install_type": "git-clone", + "reference": "https://github.com/thecooltechguy/ComfyUI-MagicAnimate", + "title": "ComfyUI-MagicAnimate" + }, + { + "author": "thecooltechguy", + "description": "The best way to run, share, & discover thousands of ComfyUI workflows.", + "files": [ + "https://github.com/thecooltechguy/ComfyUI-ComfyWorkflows" + ], + "install_type": "git-clone", + "reference": "https://github.com/thecooltechguy/ComfyUI-ComfyWorkflows", + "title": "ComfyUI-ComfyWorkflows" + }, + { + "author": "Danand", + "description": " If you want to draw two different characters together without blending their features, so you could try to check out this custom node.", + "files": [ + "https://github.com/Danand/ComfyUI-ComfyCouple" + ], + "install_type": "git-clone", + "reference": "https://github.com/Danand/ComfyUI-ComfyCouple", + "title": "ComfyUI-ComfyCouple" + }, + { + "author": "42lux", + "description": "A NSFW/Safety Checker Node for ComfyUI.", + "files": [ + "https://github.com/42lux/ComfyUI-safety-checker" + ], + "install_type": "git-clone", + "reference": "https://github.com/42lux/ComfyUI-safety-checker", + "title": "ComfyUI-safety-checker" + }, + { + "author": "sergekatzmann", + "description": "Nodes:Image Square Adapter Node, Image Resize And Crop Node", + "files": [ + "https://github.com/sergekatzmann/ComfyUI_Nimbus-Pack" + ], + "install_type": "git-clone", + "reference": "https://github.com/sergekatzmann/ComfyUI_Nimbus-Pack", + "title": "ComfyUI_Nimbus-Pack" + }, + { + "author": "komojini", + "description": "Nodes:XL DreamBooth LoRA, S3 Bucket LoRA", + "files": [ + "https://github.com/komojini/ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/komojini/ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes", + "title": "ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes" + }, + { + "author": "komojini", + "description": "Nodes:YouTube Video Loader. 
Custom ComfyUI Nodes for video generation", + "files": [ + "https://github.com/komojini/komojini-comfyui-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/komojini/komojini-comfyui-nodes", + "title": "komojini-comfyui-nodes" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "Nodes:Text_Image_Zho, Text_Image_Multiline_Zho, RGB_Image_Zho, AlphaChanelAddByMask, ImageComposite_Zho, ...", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite", + "title": "ComfyUI-Text_Image-Composite [WIP]" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "Using Gemini-pro & Gemini-pro-vision in ComfyUI.", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini", + "title": "ComfyUI-Gemini" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "ComfyUI Portrait Master \u7b80\u4f53\u4e2d\u6587\u7248.", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/comfyui-portrait-master-zh-cn" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/comfyui-portrait-master-zh-cn", + "title": "comfyui-portrait-master-zh-cn" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "Nodes:Q-Align Scoring. Implementation of [a/Q-Align](https://arxiv.org/abs/2312.17090) for ComfyUI", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Q-Align" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Q-Align", + "title": "ComfyUI-Q-Align" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "Unofficial implementation of [a/InstantID](https://github.com/InstantID/InstantID) for ComfyUI", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID", + "title": "ComfyUI-InstantID" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "Unofficial implementation of [a/PhotoMaker](https://github.com/TencentARC/PhotoMaker) for ComfyUI", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-PhotoMaker-ZHO" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-PhotoMaker-ZHO", + "title": "ComfyUI PhotoMaker (ZHO)" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "QWen-VL-Plus & QWen-VL-Max in ComfyUI", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Qwen-VL-API" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Qwen-VL-API", + "title": "ComfyUI-Qwen-VL-API" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "My Workflows + Auxiliary nodes for Stable Video Diffusion (SVD)", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SVD-ZHO" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SVD-ZHO", + "title": "ComfyUI-SVD-ZHO (WIP)" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "Unofficial implementation of [a/SegMoE: Segmind Mixture of Diffusion Experts](https://github.com/segmind/segmoe) for ComfyUI", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SegMoE" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SegMoE", + "title": "ComfyUI SegMoE" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "Unofficial implementation of [a/YOLO-World + EfficientSAM](https://huggingface.co/spaces/SkalskiP/YOLO-World) & 
[a/YOLO-World](https://github.com/AILab-CVC/YOLO-World) for ComfyUI", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-YoloWorld-EfficientSAM" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-YoloWorld-EfficientSAM", + "title": "ComfyUI YoloWorld-EfficientSAM" + }, + { + "author": "kenjiqq", + "description": "Nodes:Any List, Image Accumulator Start, Image Accumulator End, Load Lines From Text File, XY Grid Helper, Slice List, Axis To String/Int/Float/Model, ...", + "files": [ + "https://github.com/kenjiqq/qq-nodes-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/kenjiqq/qq-nodes-comfyui", + "title": "qq-nodes-comfyui" + }, + { + "author": "80sVectorz", + "description": "Adds Static Primitives to ComfyUI. Mostly to work with reroute nodes", + "files": [ + "https://github.com/80sVectorz/ComfyUI-Static-Primitives" + ], + "install_type": "git-clone", + "reference": "https://github.com/80sVectorz/ComfyUI-Static-Primitives", + "title": "ComfyUI-Static-Primitives" + }, + { + "author": "AbdullahAlfaraj", + "description": "Nodes: load Image with metadata, get config data, load image from base64 string, Load Loras From Prompt, Generate Latent Noise, Combine Two Latents Into Batch, General Purpose Controlnet Unit, ControlNet Script, Content Mask Latent, Auto-Photoshop-SD Seed, Expand and Blur the Mask", + "files": [ + "https://github.com/AbdullahAlfaraj/Comfy-Photoshop-SD" + ], + "install_type": "git-clone", + "reference": "https://github.com/AbdullahAlfaraj/Comfy-Photoshop-SD", + "title": "Comfy-Photoshop-SD" + }, + { + "author": "zhuanqianfish", + "description": "Capture window content from other programs, easyway combined with LCM for real-time painting", + "files": [ + "https://github.com/zhuanqianfish/ComfyUI-EasyNode" + ], + "install_type": "git-clone", + "reference": "https://github.com/zhuanqianfish/ComfyUI-EasyNode", + "title": "EasyCaptureNode for ComfyUI" + }, + { + "author": "discopixel-studio", + "description": "Nodes:TransformTemplateOntoFaceMask, ... A small collection of custom nodes for use with ComfyUI, by Discopixel", + "files": [ + "https://github.com/discopixel-studio/comfyui-discopixel" + ], + "install_type": "git-clone", + "reference": "https://github.com/discopixel-studio/comfyui-discopixel", + "title": "ComfyUI Discopixel Nodes" + }, + { + "author": "zcfrank1st", + "description": "Nodes: Yolov8Detection, Yolov8Segmentation. 
Deadly simple yolov8 comfyui plugin", + "files": [ + "https://github.com/zcfrank1st/Comfyui-Yolov8" + ], + "install_type": "git-clone", + "reference": "https://github.com/zcfrank1st/Comfyui-Yolov8", + "title": "ComfyUI Yolov8" + }, + { + "author": "SoftMeng", + "description": "Nodes: ComfyUI Mexx Styler, ComfyUI Mexx Styler Advanced", + "files": [ + "https://github.com/SoftMeng/ComfyUI_Mexx_Styler" + ], + "install_type": "git-clone", + "reference": "https://github.com/SoftMeng/ComfyUI_Mexx_Styler", + "title": "ComfyUI_Mexx_Styler" + }, + { + "author": "SoftMeng", + "description": "Nodes: ComfyUI_Mexx_Poster", + "files": [ + "https://github.com/SoftMeng/ComfyUI_Mexx_Poster" + ], + "install_type": "git-clone", + "reference": "https://github.com/SoftMeng/ComfyUI_Mexx_Poster", + "title": "ComfyUI_Mexx_Poster" + }, + { + "author": "wmatson", + "description": "Nodes: HTTP POST, Empty Dict, Assoc Str, Assoc Dict, Assoc Img, Load Img From URL (EZ), Load Img Batch From URLs (EZ), Video Combine + upload (EZ), ...", + "files": [ + "https://github.com/wmatson/easy-comfy-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/wmatson/easy-comfy-nodes", + "title": "easy-comfy-nodes" + }, + { + "author": "DrJKL", + "description": "A ComfyUI extension to add spatial anchors/waypoints to better navigate large workflows.", + "files": [ + "https://github.com/DrJKL/ComfyUI-Anchors" + ], + "install_type": "git-clone", + "reference": "https://github.com/DrJKL/ComfyUI-Anchors", + "title": "ComfyUI-Anchors" + }, + { + "author": "vanillacode314", + "description": "A simple wildcard node for ComfyUI. Can also be used a style prompt node.", + "files": [ + "https://github.com/vanillacode314/SimpleWildcardsComfyUI" + ], + "install_type": "git-clone", + "pip": [ + "pipe" + ], + "reference": "https://github.com/vanillacode314/SimpleWildcardsComfyUI", + "title": "Simple Wildcard" + }, + { + "author": "WebDev9000", + "description": "Nodes:Ignore Braces, Settings Switch.", + "files": [ + "https://github.com/WebDev9000/WebDev9000-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/WebDev9000/WebDev9000-Nodes", + "title": "WebDev9000-Nodes" + }, + { + "author": "Scholar01", + "description": "Nodes:Keyframe Part, Keyframe Interpolation Part, Keyframe Apply.", + "files": [ + "https://github.com/Scholar01/ComfyUI-Keyframe" + ], + "install_type": "git-clone", + "reference": "https://github.com/Scholar01/ComfyUI-Keyframe", + "title": "SComfyUI-Keyframe" + }, + { + "author": "Haoming02", + "description": "This is the ComfyUI port of the joint research between me and TimothyAlexisVass. 
For more information, check out the original [a/Extension](https://github.com/Haoming02/sd-webui-diffusion-cg) for Automatic1111.", + "files": [ + "https://github.com/Haoming02/comfyui-diffusion-cg" + ], + "install_type": "git-clone", + "reference": "https://github.com/Haoming02/comfyui-diffusion-cg", + "title": "ComfyUI Diffusion Color Grading" + }, + { + "author": "Haoming02", + "description": "This is an Extension for ComfyUI, which helps formatting texts.", + "files": [ + "https://github.com/Haoming02/comfyui-prompt-format" + ], + "install_type": "git-clone", + "reference": "https://github.com/Haoming02/comfyui-prompt-format", + "title": "comfyui-prompt-format" + }, + { + "author": "Haoming02", + "description": "This is an Extension for ComfyUI, which adds a button, CLS, to clear the console window.", + "files": [ + "https://github.com/Haoming02/comfyui-clear-screen" + ], + "install_type": "git-clone", + "reference": "https://github.com/Haoming02/comfyui-clear-screen", + "title": "ComfyUI Clear Screen" + }, + { + "author": "Haoming02", + "description": "This is an Extension for ComfyUI, which moves the menu to the specified corner on startup.", + "files": [ + "https://github.com/Haoming02/comfyui-menu-anchor" + ], + "install_type": "git-clone", + "reference": "https://github.com/Haoming02/comfyui-menu-anchor", + "title": "ComfyUI Menu Anchor" + }, + { + "author": "Haoming02", + "description": "This is an Extension for ComfyUI, which moves the menu to the specified corner on startup.", + "files": [ + "https://github.com/Haoming02/comfyui-tab-handler" + ], + "install_type": "git-clone", + "reference": "https://github.com/Haoming02/comfyui-tab-handler", + "title": "ComfyUI Tab Handler" + }, + { + "author": "Haoming02", + "description": "This is an Extension for ComfyUI, which allows you to control the logic flow with just one click!", + "files": [ + "https://github.com/Haoming02/comfyui-floodgate" + ], + "install_type": "git-clone", + "reference": "https://github.com/Haoming02/comfyui-floodgate", + "title": "ComfyUI Floodgate" + }, + { + "author": "bedovyy", + "description": "This extension helps generate images through NAI.", + "files": [ + "https://github.com/bedovyy/ComfyUI_NAIDGenerator" + ], + "install_type": "git-clone", + "reference": "https://github.com/bedovyy/ComfyUI_NAIDGenerator", + "title": "ComfyUI_NAIDGenerator" + }, + { + "author": "Off-Live", + "description": "Nodes:Image Crop Fit, OFF SEGS to Image, Crop Center wigh SEGS, Watermarking, GW Number Formatting Node.", + "files": [ + "https://github.com/Off-Live/ComfyUI-off-suite" + ], + "install_type": "git-clone", + "reference": "https://github.com/Off-Live/ComfyUI-off-suite", + "title": "ComfyUI-off-suite" + }, + { + "author": "ningxiaoxiao", + "description": "Real-time input output node for ComfyUI by NDI. Leveraging the powerful linking capabilities of NDI, you can access NDI video stream frames and send images generated by the model to NDI video streams.", + "files": [ + "https://github.com/ningxiaoxiao/comfyui-NDI" + ], + "install_type": "git-clone", + "pip": [ + "ndi-python" + ], + "reference": "https://github.com/ningxiaoxiao/comfyui-NDI", + "title": "comfyui-NDI" + }, + { + "author": "subtleGradient", + "description": "Two-finger scrolling (vertical and horizontal) to pan the canvas. Two-finger pinch to zoom in and out. Command-scroll up and down to zoom in and out. 
Fixes [comfyanonymous/ComfyUI#2059](https://github.com/comfyanonymous/ComfyUI/issues/2059).", + "files": [ + "https://github.com/subtleGradient/TinkerBot-tech-for-ComfyUI-Touchpad" + ], + "install_type": "git-clone", + "reference": "https://github.com/subtleGradient/TinkerBot-tech-for-ComfyUI-Touchpad", + "title": "Touchpad two-finger gesture support for macOS" + }, + { + "author": "zcfrank1st", + "description": "Nodes:visual_anagrams_sample, visual_anagrams_animate", + "files": [ + "https://github.com/zcfrank1st/comfyui_visual_anagrams" + ], + "install_type": "git-clone", + "reference": "https://github.com/zcfrank1st/comfyui_visual_anagrams", + "title": "comfyui_visual_anagram" + }, + { + "author": "Electrofried", + "description": "A simply node for hooking in to openAI API based servers via comfyUI", + "files": [ + "https://github.com/Electrofried/ComfyUI-OpenAINode" + ], + "install_type": "git-clone", + "reference": "https://github.com/Electrofried/ComfyUI-OpenAINode", + "title": "OpenAINode" + }, + { + "author": "AustinMroz", + "description": "Experimental utility nodes with a focus on manipulation of noised latents", + "files": [ + "https://github.com/AustinMroz/ComfyUI-SpliceTools" + ], + "install_type": "git-clone", + "reference": "https://github.com/AustinMroz/ComfyUI-SpliceTools", + "title": "SpliceTools" + }, + { + "author": "11cafe", + "description": "A ComfyUI custom node for project management to centralize the management of all your workflows in one place. Seamlessly switch between workflows, create and update them within a single workspace, like Google Docs.", + "files": [ + "https://github.com/11cafe/comfyui-workspace-manager" + ], + "install_type": "git-clone", + "reference": "https://github.com/11cafe/comfyui-workspace-manager", + "title": "ComfyUI Workspace Manager - Comfyspace" + }, + { + "author": "knuknX", + "description": "Nodes:BatchImageResizeProcessor, SingleImagePathLoader, SingleImageUrlLoader", + "files": [ + "https://github.com/knuknX/ComfyUI-Image-Tools" + ], + "install_type": "git-clone", + "reference": "https://github.com/knuknX/ComfyUI-Image-Tools", + "title": "ComfyUI-Image-Tools" + }, + { + "author": "jtrue", + "description": "A collection of nodes powering a tensor oracle on a home network with automation", + "files": [ + "https://github.com/jtrue/ComfyUI-JaRue" + ], + "install_type": "git-clone", + "nodename_pattern": "_jru$", + "reference": "https://github.com/jtrue/ComfyUI-JaRue", + "title": "ComfyUI-JaRue" + }, + { + "author": "filliptm", + "description": "Nodes:FL Image Randomizer. The start of a pack that I will continue to build out to fill the gaps of nodes and functionality that I feel is missing in comfyUI", + "files": [ + "https://github.com/filliptm/ComfyUI_Fill-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/filliptm/ComfyUI_Fill-Nodes", + "title": "ComfyUI_Fill-Nodes" + }, + { + "author": "zfkun", + "description": "A collection of nodes for common tools, including text preview, text translation (multi-platform, multi-language), image loader, webcamera capture.", + "files": [ + "https://github.com/zfkun/ComfyUI_zfkun" + ], + "install_type": "git-clone", + "reference": "https://github.com/zfkun/ComfyUI_zfkun", + "title": "ComfyUI_zfkun" + }, + { + "author": "80sVectorz", + "description": "Adds Static Primitives to ComfyUI. 
Mostly to work with reroute nodes", + "files": [ + "https://github.com/80sVectorz/ComfyUI-Static-Primitives" + ], + "install_type": "git-clone", + "reference": "https://github.com/80sVectorz/ComfyUI-Static-Primitives", + "title": "ComfyUI-Static-Primitives" + }, + { + "author": "zcfrank1st", + "description": "Nodes:Preview Json, Save Json, Test Json Preview, ... preview and save nodes", + "files": [ + "https://github.com/zcfrank1st/Comfyui-Toolbox" + ], + "install_type": "git-clone", + "reference": "https://github.com/zcfrank1st/Comfyui-Toolbox", + "title": "Comfyui-Toolbox" + }, + { + "author": "talesofai", + "description": "This is an image/video/workflow browser and manager for ComfyUI. You could add image/video/workflow to collections and load it to ComfyUI. You will be able to use your collections everywhere.", + "files": [ + "https://github.com/talesofai/comfyui-browser" + ], + "install_type": "git-clone", + "reference": "https://github.com/talesofai/comfyui-browser", + "title": "ComfyUI Browser" + }, + { + "author": "yolain", + "description": "To enhance the usability of ComfyUI, optimizations and integrations have been implemented for several commonly used nodes.", + "files": [ + "https://github.com/yolain/ComfyUI-Easy-Use" + ], + "install_type": "git-clone", + "reference": "https://github.com/yolain/ComfyUI-Easy-Use", + "title": "ComfyUI Easy Use" + }, + { + "author": "bruefire", + "description": "This is an extension node for ComfyUI that allows you to load frames from a video in bulk and perform masking and sketching on each frame through a GUI.", + "files": [ + "https://github.com/bruefire/ComfyUI-SeqImageLoader" + ], + "install_type": "git-clone", + "reference": "https://github.com/bruefire/ComfyUI-SeqImageLoader", + "title": "ComfyUI Sequential Image Loader" + }, + { + "author": "mmaker", + "description": "Node: Color Enhance, Color Blend. This is the same algorithm GIMP/GEGL uses for color enhancement. The gist of this implementation is that it converts the color space to CIELCh(ab) and normalizes the chroma (or [colorfulness](https://en.wikipedia.org/wiki/Colorfulness)] component. Original source can be found in the link below.", + "files": [ + "https://git.mmaker.moe/mmaker/sd-webui-color-enhance" + ], + "install_type": "git-clone", + "reference": "https://git.mmaker.moe/mmaker/sd-webui-color-enhance", + "title": "Color Enhance" + }, + { + "author": "modusCell", + "description": "Simple node for sharing latent image size between nodes. Preset dimensions for SD and XL.", + "files": [ + "https://github.com/modusCell/ComfyUI-dimension-node-modusCell" + ], + "install_type": "git-clone", + "reference": "https://github.com/modusCell/ComfyUI-dimension-node-modusCell", + "title": "Preset Dimensions" + }, + { + "author": "aria1th", + "description": "Nodes:UniformRandomFloat..., RandomShuffleInt, YieldableIterator..., LogicGate..., Add..., MergeString, MemoryNode, ...", + "files": [ + "https://github.com/aria1th/ComfyUI-LogicUtils" + ], + "install_type": "git-clone", + "reference": "https://github.com/aria1th/ComfyUI-LogicUtils", + "title": "ComfyUI-LogicUtils" + }, + { + "author": "MitoshiroPJ", + "description": "This custom node allow controlling output without training. The reducing method is similar to [a/Spatial-Reduction Attention](https://paperswithcode.com/method/spatial-reduction-attention), but generating speed may not be increased on typical image sizes due to overheads. 
(In some cases, slightly slower)", + "files": [ + "https://github.com/MitoshiroPJ/comfyui_slothful_attention" + ], + "install_type": "git-clone", + "reference": "https://github.com/MitoshiroPJ/comfyui_slothful_attention", + "title": "ComfyUI Slothful Attention" + }, + { + "author": "brianfitzgerald", + "description": "Implementation of the [a/StyleAligned](https://style-aligned-gen.github.io/) paper for ComfyUI. This node allows you to apply a consistent style to all images in a batch; by default it will use the first image in the batch as the style reference, forcing all other images to be consistent with it.", + "files": [ + "https://github.com/brianfitzgerald/style_aligned_comfy" + ], + "install_type": "git-clone", + "reference": "https://github.com/brianfitzgerald/style_aligned_comfy", + "title": "StyleAligned for ComfyUI" + }, + { + "author": "deroberon", + "description": "The Demofusion Custom Node is a wrapper that adapts the work and implementation of the [a/DemoFusion](https://ruoyidu.github.io/demofusion/demofusion.html) technique created and implemented by Ruoyi Du to the Comfyui environment.", + "files": [ + "https://github.com/deroberon/demofusion-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/deroberon/demofusion-comfyui", + "title": "demofusion-comfyui" + }, + { + "author": "deroberon", + "description": "StableZero123 is a node wrapper that uses the model and technique provided [here](https://github.com/SUDO-AI-3D/zero123plus/). It uses the Zero123plus model to generate 3D views using just one image.", + "files": [ + "https://github.com/deroberon/StableZero123-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/deroberon/StableZero123-comfyui", + "title": "StableZero123-comfyui" + }, + { + "author": "glifxyz", + "description": "Nodes:Consistency VAE Decoder.", + "files": [ + "https://github.com/glifxyz/ComfyUI-GlifNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/glifxyz/ComfyUI-GlifNodes", + "title": "ComfyUI-GlifNodes" + }, + { + "author": "concarne000", + "description": "Nodes:Bing Image Grabber node for ComfyUI.", + "files": [ + "https://github.com/concarne000/ConCarneNode" + ], + "install_type": "git-clone", + "reference": "https://github.com/concarne000/ConCarneNode", + "title": "ConCarneNode" + }, + { + "author": "Aegis72", + "description": "These nodes will be placed in comfyui/custom_nodes/aegisflow and contains the image passer (accepts an image as either wired or wirelessly, input and passes it through. Latent passer does the same for latents, and the Preprocessor chooser allows a passthrough image and 10 controlnets to be passed in AegisFlow Shima. The inputs on the Preprocessor chooser should not be renamed if you intend to accept image inputs wirelessly through UE nodes. It can be done, but the send node input regex for each controlnet preprocessor column must also be changed.", + "files": [ + "https://github.com/aegis72/aegisflow_utility_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/aegis72/aegisflow_utility_nodes", + "title": "AegisFlow Utility Nodes" + }, + { + "author": "Aegis72", + "description": "This is a straight clone of Azazeal04's all-in-one styler menu, which was removed from gh on Jan 21, 2024. 
I have made no changes to the files at all.", + "files": [ + "https://github.com/aegis72/comfyui-styles-all" + ], + "install_type": "git-clone", + "reference": "https://github.com/aegis72/comfyui-styles-all", + "title": "ComfyUI-styles-all" + }, + { + "author": "glibsonoran", + "description": "Nodes: Style Prompt, OAI Dall_e Image. Plush contains two OpenAI enabled nodes: Style Prompt: Takes your prompt and the art style you specify and generates a prompt from ChatGPT3 or 4 that Stable Diffusion can use to generate an image in that style. OAI Dall_e 3: Takes your prompt and parameters and produces a Dall_e3 image in ComfyUI.", + "files": [ + "https://github.com/glibsonoran/Plush-for-ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/glibsonoran/Plush-for-ComfyUI", + "title": "Plush-for-ComfyUI" + }, + { + "author": "vienteck", + "description": "This extension is a reimagined version based on the [a/ComfyUI-QualityOfLifeSuit_Omar92](https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92) extension, and it supports integration with ChatGPT through the new OpenAI API.\nNOTE: See detailed installation instructions on the [a/repository](https://github.com/vienteck/ComfyUI-Chat-GPT-Integration).", + "files": [ + "https://github.com/vienteck/ComfyUI-Chat-GPT-Integration" + ], + "install_type": "git-clone", + "reference": "https://github.com/vienteck/ComfyUI-Chat-GPT-Integration", + "title": "ComfyUI-Chat-GPT-Integration" + }, + { + "author": "MNeMoNiCuZ", + "description": "Nodes:Save Text File", + "files": [ + "https://github.com/MNeMoNiCuZ/ComfyUI-mnemic-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/MNeMoNiCuZ/ComfyUI-mnemic-nodes", + "title": "ComfyUI-mnemic-nodes" + }, + { + "author": "AI2lab", + "description": "Integrate non-painting capabilities into comfyUI, including data, algorithms, video processing, large models, etc., to facilitate the construction of more powerful workflows.", + "files": [ + "https://github.com/AI2lab/comfyUI-tool-2lab" + ], + "install_type": "git-clone", + "reference": "https://github.com/AI2lab/comfyUI-tool-2lab", + "title": "comfyUI-tool-2lab" + }, + { + "author": "SpaceKendo", + "description": "This is node replaces the init_image conditioning for the [a/Stable Video Diffusion](https://github.com/Stability-AI/generative-models) image to video model with text embeds, together with a conditioning frame. 
The conditioning frame is a set of latents.", + "files": [ + "https://github.com/SpaceKendo/ComfyUI-svd_txt2vid" + ], + "install_type": "git-clone", + "reference": "https://github.com/SpaceKendo/ComfyUI-svd_txt2vid", + "title": "Text to video for Stable Video Diffusion in ComfyUI" + }, + { + "author": "NimaNzrii", + "description": "popup preview for comfyui", + "files": [ + "https://github.com/NimaNzrii/comfyui-popup_preview" + ], + "install_type": "git-clone", + "reference": "https://github.com/NimaNzrii/comfyui-popup_preview", + "title": "comfyui-popup_preview" + }, + { + "author": "NimaNzrii", + "description": "Photoshop node inside of ComfyUi, send and get data from Photoshop", + "files": [ + "https://github.com/NimaNzrii/comfyui-photoshop" + ], + "install_type": "git-clone", + "reference": "https://github.com/NimaNzrii/comfyui-photoshop", + "title": "comfyui-photoshop" + }, + { + "author": "Rui", + "description": "Rui's workflow-specific custom node, written using GPT.", + "files": [ + "https://github.com/rui40000/RUI-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/rui40000/RUI-Nodes", + "title": "RUI-Nodes" + }, + { + "author": "dmarx", + "description": "ComfyUI nodes to facilitate parameter/prompt keyframing using comfyui nodes for defining and manipulating parameter curves. Essentially provides a ComfyUI interface to the [a/keyframed](https://github.com/dmarx/keyframed) library.", + "files": [ + "https://github.com/dmarx/ComfyUI-Keyframed" + ], + "install_type": "git-clone", + "reference": "https://github.com/dmarx/ComfyUI-Keyframed", + "title": "ComfyUI-Keyframed" + }, + { + "author": "dmarx", + "description": "porting audioreactivity pipeline from vktrs to comfyui.", + "files": [ + "https://github.com/dmarx/ComfyUI-AudioReactive" + ], + "install_type": "git-clone", + "reference": "https://github.com/dmarx/ComfyUI-AudioReactive", + "title": "ComfyUI-AudioReactive" + }, + { + "author": "TripleHeadedMonkey", + "description": "This extension provides various SDXL Prompt Stylers. See: [a/youtube](https://youtu.be/WBHI-2uww7o?si=dijvDaUI4nmx4VkF)", + "files": [ + "https://github.com/TripleHeadedMonkey/ComfyUI_MileHighStyler" + ], + "install_type": "git-clone", + "reference": "https://github.com/TripleHeadedMonkey/ComfyUI_MileHighStyler", + "title": "ComfyUI_MileHighStyler" + }, + { + "author": "BennyKok", + "description": "Open source comfyui deployment platform, a vercel for generative workflow infra.", + "files": [ + "https://github.com/BennyKok/comfyui-deploy" + ], + "install_type": "git-clone", + "reference": "https://github.com/BennyKok/comfyui-deploy", + "title": "ComfyUI Deploy" + }, + { + "author": "florestefano1975", + "description": "ComfyUI Portrait Master. A node designed to help AI image creators to generate prompts for human portraits.", + "files": [ + "https://github.com/florestefano1975/comfyui-portrait-master" + ], + "install_type": "git-clone", + "reference": "https://github.com/florestefano1975/comfyui-portrait-master", + "title": "comfyui-portrait-master" + }, + { + "author": "florestefano1975", + "description": "A suite of tools for prompt management. Combining nodes helps the user sequence strings for prompts, also creating logical groupings if necessary. 
Individual nodes can be chained together in any order.", + "files": [ + "https://github.com/florestefano1975/comfyui-prompt-composer" + ], + "install_type": "git-clone", + "reference": "https://github.com/florestefano1975/comfyui-prompt-composer", + "title": "comfyui-prompt-composer" + }, + { + "author": "mozman", + "description": "This extension provides styler nodes for SDXL.\n\nNOTE: Due to the dynamic nature of node name definitions, ComfyUI-Manager cannot recognize the node list from this extension. The Missing nodes and Badge features are not available for this extension.", + "files": [ + "https://github.com/mozman/ComfyUI_mozman_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/mozman/ComfyUI_mozman_nodes", + "title": "ComfyUI_mozman_nodes" + }, + { + "author": "rcsaquino", + "description": "Nodes: VAE Processor, VAE Loader, Background Remover", + "files": [ + "https://github.com/rcsaquino/comfyui-custom-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/rcsaquino/comfyui-custom-nodes", + "title": "rcsaquino/comfyui-custom-nodes" + }, + { + "author": "rcfcu2000", + "description": "Nodes: Combine ZHGMasks, Cover ZHGMasks, ZHG FaceIndex, ZHG SaveImage, ZHG SmoothEdge, ZHG GetMaskArea, ...", + "files": [ + "https://github.com/rcfcu2000/zhihuige-nodes-comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/rcfcu2000/zhihuige-nodes-comfyui", + "title": "zhihuige-nodes-comfyui" + }, + { + "author": "IDGallagher", + "description": "Custom nodes to aid in the exploration of Latent Space", + "files": [ + "https://github.com/IDGallagher/ComfyUI-IG-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/IDGallagher/ComfyUI-IG-Nodes", + "title": "IG Interpolation Nodes" + }, + { + "author": "violet-chen", + "description": "Nodes: Psd2Png.", + "files": [ + "https://github.com/violet-chen/comfyui-psd2png" + ], + "install_type": "git-clone", + "reference": "https://github.com/violet-chen/comfyui-psd2png", + "title": "comfyui-psd2png" + }, + { + "author": "lldacing", + "description": "Nodes: Base64 To Image, Image To Base64, Load Image To Base64.", + "files": [ + "https://github.com/lldacing/comfyui-easyapi-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/lldacing/comfyui-easyapi-nodes", + "title": "comfyui-easyapi-nodes" + }, + { + "author": "CosmicLaca", + "description": "This extension provides various utility nodes. Inputs(prompt, styles, dynamic, merger, ...), Outputs(style pile), Dashboard(selectors, loader, switch, ...), Networks(LORA, Embedding, Hypernetwork), Visuals(visual selectors, )", + "files": [ + "https://github.com/CosmicLaca/ComfyUI_Primere_Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/CosmicLaca/ComfyUI_Primere_Nodes", + "title": "Primere nodes for ComfyUI" + }, + { + "author": "RenderRift", + "description": "Nodes:RR_Date_Folder_Format, RR_Image_Metadata_Overlay, RR_VideoPathMetaExtraction, RR_DisplayMetaOptions. This extension provides nodes designed to enhance the Animatediff workflow.", + "files": [ + "https://github.com/RenderRift/ComfyUI-RenderRiftNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/RenderRift/ComfyUI-RenderRiftNodes", + "title": "ComfyUI-RenderRiftNodes" + }, + { + "author": "OpenArt-AI", + "description": "ComfyUI Assistant is your one stop plugin for everything you need to get started with comfy-ui. 
Now it provides useful courses, tutorials, and basic templates.", + "files": [ + "https://github.com/OpenArt-AI/ComfyUI-Assistant" + ], + "install_type": "git-clone", + "reference": "https://github.com/OpenArt-AI/ComfyUI-Assistant", + "title": "ComfyUI Assistant" + }, + { + "author": "ttulttul", + "description": "Nodes: Iterative Mixing KSampler, Batch Unsampler, Iterative Mixing KSampler Advanced", + "files": [ + "https://github.com/ttulttul/ComfyUI-Iterative-Mixer" + ], + "install_type": "git-clone", + "reference": "https://github.com/ttulttul/ComfyUI-Iterative-Mixer", + "title": "ComfyUI Iterative Mixing Nodes" + }, + { + "author": "ttulttul", + "description": "This repo contains nodes for ComfyUI that implement some helpful operations on tensors, such as normalization.", + "files": [ + "https://github.com/ttulttul/ComfyUI-Tensor-Operations" + ], + "install_type": "git-clone", + "reference": "https://github.com/ttulttul/ComfyUI-Tensor-Operations", + "title": "ComfyUI-Tensor-Operations" + }, + { + "author": "jitcoder", + "description": "Shows Lora information from CivitAI and outputs trigger words and example prompt", + "files": [ + "https://github.com/jitcoder/lora-info" + ], + "install_type": "git-clone", + "reference": "https://github.com/jitcoder/lora-info", + "title": "LoraInfo" + }, + { + "author": "ceruleandeep", + "description": "A ComfyUI extension for chatting with your images. Runs on your own system, no external services used, no filter. Uses the [a/LLaVA multimodal LLM](https://llava-vl.github.io/) so you can give instructions or ask questions in natural language. It's maybe as smart as GPT3.5, and it can see.", + "files": [ + "https://github.com/ceruleandeep/ComfyUI-LLaVA-Captioner" + ], + "install_type": "git-clone", + "reference": "https://github.com/ceruleandeep/ComfyUI-LLaVA-Captioner", + "title": "ComfyUI LLaVA Captioner" + }, + { + "author": "styler00dollar", + "description": "Directly upscaling inside the latent space. Model was trained for SD1.5 and drawn content. Might add new architectures or update models at some point. This took heavy inspriration from [city96/SD-Latent-Upscaler](https://github.com/city96/SD-Latent-Upscaler) and [Ttl/ComfyUi_NNLatentUpscale](https://github.com/Ttl/ComfyUi_NNLatentUpscale). ", + "files": [ + "https://github.com/styler00dollar/ComfyUI-sudo-latent-upscale" + ], + "install_type": "git-clone", + "reference": "https://github.com/styler00dollar/ComfyUI-sudo-latent-upscale", + "title": "ComfyUI-sudo-latent-upscale" + }, + { + "author": "styler00dollar", + "description": "This extension provides nodes for [a/DeepCache: Accelerating Diffusion Models for Free](https://arxiv.org/abs/2312.00858)\nNOTE:Original code can be found [a/here](https://gist.github.com/laksjdjf/435c512bc19636e9c9af4ee7bea9eb86). Full credit to laksjdjf for sharing the code. ", + "files": [ + "https://github.com/styler00dollar/ComfyUI-deepcache" + ], + "install_type": "git-clone", + "reference": "https://github.com/styler00dollar/ComfyUI-deepcache", + "title": "ComfyUI-deepcache" + }, + { + "author": "HarroweD and quadmoon", + "description": "Harronode is a custom node designed to build prompts easily for use with the Harrlogos SDXL LoRA. 
This Node simplifies the process of crafting prompts and makes all built in activation terms available at your fingertips.", + "files": [ + "https://github.com/NotHarroweD/Harronode" + ], + "install_type": "git-clone", + "nodename_pattern": "Harronode", + "reference": "https://github.com/NotHarroweD/Harronode", + "title": "Harronode" + }, + { + "author": "Limitex", + "description": "Nodes: Center Calculation. Improved Numerical Calculation for ComfyUI", + "files": [ + "https://github.com/Limitex/ComfyUI-Calculation" + ], + "install_type": "git-clone", + "reference": "https://github.com/Limitex/ComfyUI-Calculation", + "title": "ComfyUI-Calculation" + }, + { + "author": "Limitex", + "description": "This extension enables the use of the diffuser pipeline in ComfyUI.", + "files": [ + "https://github.com/Limitex/ComfyUI-Diffusers" + ], + "install_type": "git-clone", + "reference": "https://github.com/Limitex/ComfyUI-Diffusers", + "title": "ComfyUI-Diffusers" + }, + { + "author": "edenartlab", + "description": "Nodes:CLIP Interrogator, ...", + "files": [ + "https://github.com/edenartlab/eden_comfy_pipelines" + ], + "install_type": "git-clone", + "reference": "https://github.com/edenartlab/eden_comfy_pipelines", + "title": "eden_comfy_pipelines" + }, + { + "author": "pkpk", + "description": "A custom node on ComfyUI that saves images in AVIF format. Workflow can be loaded from images saved at this node.", + "files": [ + "https://github.com/pkpkTech/ComfyUI-SaveAVIF" + ], + "install_type": "git-clone", + "reference": "https://github.com/pkpkTech/ComfyUI-SaveAVIF", + "title": "ComfyUI-SaveAVIF" + }, + { + "author": "pkpkTech", + "description": "Use ngrok to allow external access to ComfyUI.\nNOTE: Need to manually modify a token inside the __init__.py file.", + "files": [ + "https://github.com/pkpkTech/ComfyUI-ngrok" + ], + "install_type": "git-clone", + "reference": "https://github.com/pkpkTech/ComfyUI-ngrok", + "title": "ComfyUI-ngrok" + }, + { + "author": "pkpk", + "description": "This is a custom node of ComfyUI that downloads and loads models from the input URL. The model is temporarily downloaded into memory and not saved to storage.\nThis could be useful when trying out models or when using various models on machines with limited storage. 
Since the model is downloaded into memory, expect higher memory usage than usual.", + "files": [ + "https://github.com/pkpkTech/ComfyUI-TemporaryLoader" + ], + "install_type": "git-clone", + "reference": "https://github.com/pkpkTech/ComfyUI-TemporaryLoader", + "title": "ComfyUI-TemporaryLoader" + }, + { + "author": "pkpkTech", + "description": "Add a button to the menu to save and load the running queue and the pending queues.\nThis is intended to be used when you want to exit ComfyUI with queues still remaining.", + "files": [ + "https://github.com/pkpkTech/ComfyUI-SaveQueues" + ], + "install_type": "git-clone", + "reference": "https://github.com/pkpkTech/ComfyUI-SaveQueues", + "title": "ComfyUI-SaveQueues" + }, + { + "author": "Crystian", + "description": "With this suit, you can see the resources monitor, progress bar & time elapsed, metadata and compare between two images, compare between two JSONs, show any value to console/display, pipes, and more!\nThis provides better nodes to load/save images, previews, etc, and see \"hidden\" data without loading a new workflow.", + "files": [ + "https://github.com/crystian/ComfyUI-Crystools" + ], + "install_type": "git-clone", + "reference": "https://github.com/crystian/ComfyUI-Crystools", + "title": "Crystools" + }, + { + "author": "Crystian", + "description": "With this quality of life extension, you can save your workflow with a specific name and include additional details such as the author, a description, and the version (in metadata/json). Important: When you share your workflow (via png/json), others will be able to see your information!", + "files": [ + "https://github.com/crystian/ComfyUI-Crystools-save" + ], + "install_type": "git-clone", + "reference": "https://github.com/crystian/ComfyUI-Crystools-save", + "title": "Crystools-save" + }, + { + "author": "Kangkang625", + "description": "This repo is a simple implementation of [a/Paint-by-Example](https://github.com/Fantasy-Studio/Paint-by-Example) based on its [a/huggingface pipeline](https://huggingface.co/Fantasy-Studio/Paint-by-Example).", + "files": [ + "https://github.com/Kangkang625/ComfyUI-paint-by-example" + ], + "install_type": "git-clone", + "pip": [ + "diffusers" + ], + "reference": "https://github.com/Kangkang625/ComfyUI-paint-by-example", + "title": "ComfyUI-Paint-by-Example" + }, + { + "author": "54rt1n", + "description": "Merge two checkpoint models by dare ties [a/(https://github.com/yule-BUAA/MergeLM)](https://github.com/yule-BUAA/MergeLM), sort of.", + "files": [ + "https://github.com/54rt1n/ComfyUI-DareMerge" + ], + "install_type": "git-clone", + "reference": "https://github.com/54rt1n/ComfyUI-DareMerge", + "title": "ComfyUI-DareMerge" + }, + { + "author": "an90ray", + "description": "Nodes: RErouter, String (RE), Int (RE)", + "files": [ + "https://github.com/an90ray/ComfyUI_RErouter_CustomNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/an90ray/ComfyUI_RErouter_CustomNodes", + "title": "ComfyUI_RErouter_CustomNodes" + }, + { + "author": "jesenzhang", + "description": "This is a simple implementation StreamDiffusion(A Pipeline-Level Solution for Real-Time Interactive Generation) for ComfyUI", + "files": [ + "https://github.com/jesenzhang/ComfyUI_StreamDiffusion" + ], + "install_type": "git-clone", + "reference": "https://github.com/jesenzhang/ComfyUI_StreamDiffusion", + "title": "ComfyUI_StreamDiffusion" + }, + { + "author": "ai-liam", + "description": "Nodes: LiamLoadImage. 
This node provides the capability to load images from a URL.", + "files": [ + "https://github.com/ai-liam/comfyui_liam_util" + ], + "install_type": "git-clone", + "reference": "https://github.com/ai-liam/comfyui_liam_util", + "title": "LiamUtil" + }, + { + "author": "Ryuukeisyou", + "description": "This is a set of custom nodes for ComfyUI. The nodes utilize the [a/face parsing model](https://huggingface.co/jonathandinu/face-parsing) to provide detailed segmantation of face. To improve face segmantation accuracy, [a/yolov8 face model](https://huggingface.co/Bingsu/adetailer/) is used to first extract face from an image. There are also auxiliary nodes for image and mask processing. A guided filter is also provided for skin smoothing.", + "files": [ + "https://github.com/Ryuukeisyou/comfyui_face_parsing" + ], + "install_type": "git-clone", + "reference": "https://github.com/Ryuukeisyou/comfyui_face_parsing", + "title": "comfyui_face_parsing" + }, + { + "author": "tocubed", + "description": "Nodes: Shadertoy, Load Audio (from Path), Audio Frame Transform (Shadertoy), Audio Frame Transform (Beats)", + "files": [ + "https://github.com/tocubed/ComfyUI-AudioReactor" + ], + "install_type": "git-clone", + "reference": "https://github.com/tocubed/ComfyUI-AudioReactor", + "title": "ComfyUI-AudioReactor" + }, + { + "author": "ntc-ai", + "description": "An experiment about combining multiple LoRAs with [a/DARE](https://arxiv.org/pdf/2311.03099.pdf)", + "files": [ + "https://github.com/ntc-ai/ComfyUI-DARE-LoRA-Merge" + ], + "install_type": "git-clone", + "reference": "https://github.com/ntc-ai/ComfyUI-DARE-LoRA-Merge", + "title": "ComfyUI - Apply LoRA Stacker with DARE" + }, + { + "author": "wwwins", + "description": "Nodes:SimpleAspectRatio", + "files": [ + "https://github.com/wwwins/ComfyUI-Simple-Aspect-Ratio" + ], + "install_type": "git-clone", + "reference": "https://github.com/wwwins/ComfyUI-Simple-Aspect-Ratio", + "title": "ComfyUI-Simple-Aspect-Ratio" + }, + { + "author": "ownimage", + "description": "Nodes:Caching Image Loader.", + "files": [ + "https://github.com/ownimage/ComfyUI-ownimage" + ], + "install_type": "git-clone", + "reference": "https://github.com/ownimage/ComfyUI-ownimage", + "title": "ComfyUI-ownimage" + }, + { + "author": "Millyarde", + "description": "Photoshop custom nodes inside of ComfyUi, send and get data via Photoshop UXP plugin for cross platform support", + "files": [ + "https://github.com/Millyarde/Pomfy" + ], + "install_type": "git-clone", + "reference": "https://github.com/Millyarde/Pomfy", + "title": "Pomfy - Photoshop and ComfyUI 2-way sync" + }, + { + "author": "Ryuukeisyou", + "description": "Nodes:ImageLoadFromBase64, ImageLoadByPath, ImageLoadAsMaskByPath, ImageSaveToPath, ImageSaveAsBase64.", + "files": [ + "https://github.com/Ryuukeisyou/comfyui_image_io_helpers" + ], + "install_type": "git-clone", + "reference": "https://github.com/Ryuukeisyou/comfyui_image_io_helpers", + "title": "comfyui_image_io_helpers" + }, + { + "author": "flowtyone", + "description": "This is a custom node that lets you take advantage of Latent Diffusion Super Resolution (LDSR) models inside ComfyUI.", + "files": [ + "https://github.com/flowtyone/ComfyUI-Flowty-LDSR" + ], + "install_type": "git-clone", + "reference": "https://github.com/flowtyone/ComfyUI-Flowty-LDSR", + "title": "ComfyUI-Flowty-LDSR" + }, + { + "author": "massao000", + "description": "Aspect ratio selector for ComfyUI based on [a/sd-webui-ar](https://github.com/alemelis/sd-webui-ar?tab=readme-ov-file).", + "files": [ 
+ "https://github.com/massao000/ComfyUI_aspect_ratios" + ], + "install_type": "git-clone", + "reference": "https://github.com/massao000/ComfyUI_aspect_ratios", + "title": "ComfyUI_aspect_ratios" + }, + { + "author": "SiliconFlow", + "description": "[a/Onediff](https://github.com/siliconflow/onediff) ComfyUI Nodes.", + "files": [ + "https://github.com/siliconflow/onediff_comfy_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/siliconflow/onediff_comfy_nodes", + "title": "OneDiff Nodes" + }, + { + "author": "ZHO-ZHO-ZHO", + "description": "Prompt Visualization | Art Gallery\n[w/WARN: Installation requires 2GB of space, and it will involve a long download time.]", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-ArtGallery" + ], + "install_type": "git-clone", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-ArtGallery", + "title": "ComfyUI-ArtGallery" + }, + { + "author": "hinablue", + "description": "Nodes:3D Pose Editor", + "files": [ + "https://github.com/hinablue/ComfyUI_3dPoseEditor" + ], + "install_type": "git-clone", + "reference": "https://github.com/hinablue/ComfyUI_3dPoseEditor", + "title": "ComfyUI 3D Pose Editor" + }, + { + "author": "chaojie", + "description": "Better Dynamic, Higher Resolution, and Stronger Coherence!", + "files": [ + "https://github.com/chaojie/ComfyUI-DynamiCrafter" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-DynamiCrafter", + "title": "ComfyUI-DynamiCrafter" + }, + { + "author": "chaojie", + "description": "ComfyUI 3d engine", + "files": [ + "https://github.com/chaojie/ComfyUI-Panda3d" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-Panda3d", + "title": "ComfyUI-Panda3d" + }, + { + "author": "chaojie", + "description": "Pymunk is a easy-to-use pythonic 2d physics library that can be used whenever you need 2d rigid body physics from Python", + "files": [ + "https://github.com/chaojie/ComfyUI-Pymunk" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-Pymunk", + "title": "ComfyUI-Pymunk" + }, + { + "author": "chaojie", + "description": "Nodes: Download the weights of MotionCtrl [a/motionctrl.pth](https://huggingface.co/TencentARC/MotionCtrl/blob/main/motionctrl.pth) and put it to ComfyUI/models/checkpoints", + "files": [ + "https://github.com/chaojie/ComfyUI-MotionCtrl" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-MotionCtrl", + "title": "ComfyUI-MotionCtrl" + }, + { + "author": "chaojie", + "description": "Nodes: that we currently provide the package only for x86-64 linux, such as Ubuntu or Debian, and Python 3.8, 3.9, and 3.10.", + "files": [ + "https://github.com/chaojie/ComfyUI-Motion-Vector-Extractor" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-Motion-Vector-Extractor", + "title": "ComfyUI-Motion-Vector-Extractor" + }, + { + "author": "chaojie", + "description": "Nodes: Download the weights of MotionCtrl-SVD [a/motionctrl_svd.ckpt](https://huggingface.co/TencentARC/MotionCtrl/blob/main/motionctrl_svd.ckpt) and put it to ComfyUI/models/checkpoints", + "files": [ + "https://github.com/chaojie/ComfyUI-MotionCtrl-SVD" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-MotionCtrl-SVD", + "title": "ComfyUI-MotionCtrl-SVD" + }, + { + "author": "chaojie", + "description": "Nodes: Download the weights of DragNUWA 
[a/drag_nuwa_svd.pth](https://drive.google.com/file/d/1Z4JOley0SJCb35kFF4PCc6N6P1ftfX4i/view) and put it to ComfyUI/models/checkpoints/drag_nuwa_svd.pth\n[w/Due to changes in the torch package and versions of many other packages, it may disrupt your installation environment.]", + "files": [ + "https://github.com/chaojie/ComfyUI-DragNUWA" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-DragNUWA", + "title": "ComfyUI-DragNUWA" + }, + { + "author": "chaojie", + "description": "Nodes: Run python tools/download_weights.py first to download weights automatically", + "files": [ + "https://github.com/chaojie/ComfyUI-Moore-AnimateAnyone" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-Moore-AnimateAnyone", + "title": "ComfyUI-Moore-AnimateAnyone" + }, + { + "author": "chaojie", + "description": "This is an implementation of [a/i2vgen-xl](https://github.com/ali-vilab/i2vgen-xl)", + "files": [ + "https://github.com/chaojie/ComfyUI-I2VGEN-XL" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-I2VGEN-XL", + "title": "ComfyUI-I2VGEN-XL" + }, + { + "author": "chaojie", + "description": "This is an ComfyUI implementation of LightGlue to generate motion brush", + "files": [ + "https://github.com/chaojie/ComfyUI-LightGlue" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-LightGlue", + "title": "ComfyUI-LightGlue" + }, + { + "author": "chaojie", + "description": "This is an ComfyUI implementation of RAFT to generate motion brush", + "files": [ + "https://github.com/chaojie/ComfyUI-RAFT" + ], + "install_type": "git-clone", + "reference": "https://github.com/chaojie/ComfyUI-RAFT", + "title": "ComfyUI-RAFT" + }, + { + "author": "alexopus", + "description": "Allows you to save images with their generation metadata compatible with Civitai. Works with png, jpeg and webp. 
Stores LoRAs, models and embeddings hashes for resource recognition.", + "files": [ + "https://github.com/alexopus/ComfyUI-Image-Saver" + ], + "install_type": "git-clone", + "reference": "https://github.com/alexopus/ComfyUI-Image-Saver", + "title": "ComfyUI Image Saver" + }, + { + "author": "kft334", + "description": "Nodes: Image(s) To Websocket (Base64), Load Image (Base64),Load Images (Base64)", + "files": [ + "https://github.com/kft334/Knodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/kft334/Knodes", + "title": "Knodes" + }, + { + "author": "MrForExample", + "description": "An extensive node suite that enables ComfyUI to process 3D inputs (Mesh & UV Texture, etc) using cutting edge algorithms (3DGS, NeRF, etc.)\nNOTE: Pre-built python wheels can be download from [a/https://github.com/remsky/ComfyUI3D-Assorted-Wheels](https://github.com/remsky/ComfyUI3D-Assorted-Wheels)", + "files": [ + "https://github.com/MrForExample/ComfyUI-3D-Pack" + ], + "install_type": "git-clone", + "nodename_pattern": "^\\[Comfy3D\\]", + "reference": "https://github.com/MrForExample/ComfyUI-3D-Pack", + "title": "ComfyUI-3D-Pack" + }, + { + "author": "Mr.ForExample", + "description": "Improved AnimateAnyone implementation that allows you to use the opse image sequence and reference image to generate stylized video.\nThe current goal of this project is to achieve desired pose2video result with 1+FPS on GPUs that are equal to or better than RTX 3080!\ud83d\ude80\n[w/The torch environment may be compromised due to version issues as some torch-related packages are being reinstalled.]", + "files": [ + "https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved" + ], + "install_type": "git-clone", + "nodename_pattern": "^\\[AnimateAnyone\\]", + "reference": "https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved", + "title": "ComfyUI-AnimateAnyone-Evolved" + }, + { + "author": "Hangover3832", + "description": "Nodes: MS kosmos-2 Interrogator, Save Image w/o Metadata, Image Scale Bounding Box. An implementation of Microsoft [a/kosmos-2](https://huggingface.co/microsoft/kosmos-2-patch14-224) image to text transformer.", + "files": [ + "https://github.com/Hangover3832/ComfyUI-Hangover-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Hangover3832/ComfyUI-Hangover-Nodes", + "title": "ComfyUI-Hangover-Nodes" + }, + { + "author": "Hangover3832", + "description": "Moondream is a lightweight multimodal large languge model.\nIMPORTANT:According to the creator, Moondream is for research purposes only, commercial use is not allowed!\n[w/WARN:Additional python code will be downloaded from huggingface and executed. 
You have to trust this creator if you want to use this node!]", + "files": [ + "https://github.com/Hangover3832/ComfyUI-Hangover-Moondream" + ], + "install_type": "git-clone", + "reference": "https://github.com/Hangover3832/ComfyUI-Hangover-Moondream", + "title": "ComfyUI-Hangover-Moondream" + }, + { + "author": "tzwm", + "description": "Calculate the execution time of all nodes.", + "files": [ + "https://github.com/tzwm/comfyui-profiler" + ], + "install_type": "git-clone", + "reference": "https://github.com/tzwm/comfyui-profiler", + "title": "ComfyUI Profiler" + }, + { + "author": "Daniel Lewis", + "description": "This is a set of nodes to interact with llama-cpp-python", + "files": [ + "https://github.com/daniel-lewis-ab/ComfyUI-Llama" + ], + "install_type": "git-clone", + "reference": "https://github.com/daniel-lewis-ab/ComfyUI-Llama", + "title": "ComfyUI-Llama" + }, + { + "author": "Daniel Lewis", + "description": "Text To Speech (TTS) for ComfyUI", + "files": [ + "https://github.com/daniel-lewis-ab/ComfyUI-TTS" + ], + "install_type": "git-clone", + "reference": "https://github.com/daniel-lewis-ab/ComfyUI-TTS", + "title": "ComfyUI-TTS" + }, + { + "author": "djbielejeski", + "description": "Extension for Automatic1111 and ComfyUI to automatically create masks for Background/Hair/Body/Face/Clothes in Img2Img", + "files": [ + "https://github.com/djbielejeski/a-person-mask-generator" + ], + "install_type": "git-clone", + "reference": "https://github.com/djbielejeski/a-person-mask-generator", + "title": "a-person-mask-generator" + }, + { + "author": "smagnetize", + "description": "Nodes:SingleImageDataUrlLoader", + "files": [ + "https://github.com/smagnetize/kb-comfyui-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/smagnetize/kb-comfyui-nodes", + "title": "kb-comfyui-nodes" + }, + { + "author": "ginlov", + "description": "Nodes:SegToMask", + "files": [ + "https://github.com/ginlov/segment_to_mask_comfyui" + ], + "install_type": "git-clone", + "reference": "https://github.com/ginlov/segment_to_mask_comfyui", + "title": "segment_to_mask_comfyui" + }, + { + "author": "glowcone", + "description": "Nodes: LoadImageFromBase64. Loads an image and its transparency mask from a base64-encoded data URI for easy API connection.", + "files": [ + "https://github.com/glowcone/comfyui-base64-to-image" + ], + "install_type": "git-clone", + "reference": "https://github.com/glowcone/comfyui-base64-to-image", + "title": "Load Image From Base64 URI" + }, + { + "author": "AInseven", + "description": "fastblend for comfyui, and other nodes that I write for video2video. rebatch image, my openpose", + "files": [ + "https://github.com/AInseven/ComfyUI-fastblend" + ], + "install_type": "git-clone", + "reference": "https://github.com/AInseven/ComfyUI-fastblend", + "title": "ComfyUI-fastblend" + }, + { + "author": "HebelHuber", + "description": "Nodes:Enhanced Save Node", + "files": [ + "https://github.com/HebelHuber/comfyui-enhanced-save-node" + ], + "install_type": "git-clone", + "reference": "https://github.com/HebelHuber/comfyui-enhanced-save-node", + "title": "comfyui-enhanced-save-node" + }, + { + "author": "LarryJane491", + "description": "If you see this message, your ComfyUI-Manager is outdated.\nRecent channel provides only the list of the latest nodes. 
If you want to find the complete node list, please go to the Default channel.\nMaking LoRA has never been easier!", + "files": [ + "https://github.com/LarryJane491/Lora-Training-in-Comfy" + ], + "install_type": "git-clone", + "reference": "https://github.com/LarryJane491/Lora-Training-in-Comfy", + "title": "Lora-Training-in-Comfy" + }, + { + "author": "LarryJane491", + "description": "The LoRA Caption custom nodes, just like their name suggests, allow you to caption images so they are ready for LoRA training.", + "files": [ + "https://github.com/LarryJane491/Image-Captioning-in-ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/LarryJane491/Image-Captioning-in-ComfyUI", + "title": "Image-Captioning-in-ComfyUI" + }, + { + "author": "Layer-norm", + "description": "A very simple ComfyUI node to remove item with mask.", + "files": [ + "https://github.com/Layer-norm/comfyui-lama-remover" + ], + "install_type": "git-clone", + "reference": "https://github.com/Layer-norm/comfyui-lama-remover", + "title": "Comfyui lama remover" + }, + { + "author": "Taremin", + "description": "Instead of LoraLoader or HypernetworkLoader, it receives a prompt and loads and applies LoRA or HN based on the specifications within the prompt. The main purpose of this custom node is to allow changes without reconnecting the LoraLoader node when the prompt is randomly altered, etc.", + "files": [ + "https://github.com/Taremin/comfyui-prompt-extranetworks" + ], + "install_type": "git-clone", + "reference": "https://github.com/Taremin/comfyui-prompt-extranetworks", + "title": "ComfyUI Prompt ExtraNetworks" + }, + { + "author": "Taremin", + "description": " This extension provides the StringToolsConcat node, which concatenates multiple texts, and the StringToolsRandomChoice node, which selects one randomly from multiple texts.", + "files": [ + "https://github.com/Taremin/comfyui-string-tools" + ], + "install_type": "git-clone", + "reference": "https://github.com/Taremin/comfyui-string-tools", + "title": "ComfyUI String Tools" + }, + { + "author": "Taremin", + "description": "Make it possible to edit the prompt using the Monaco Editor, an editor implementation used in VSCode.\nNOTE: This extension supports both ComfyUI and A1111 simultaneously.", + "files": [ + "https://github.com/Taremin/webui-monaco-prompt" + ], + "install_type": "git-clone", + "reference": "https://github.com/Taremin/webui-monaco-prompt", + "title": "WebUI Monaco Prompt" + }, + { + "author": "foxtrot-roger", + "description": "A bunch of nodes that can be useful to manipulate primitive types (numbers, text, ...) Also some helpers to generate text and timestamps.", + "files": [ + "https://github.com/foxtrot-roger/comfyui-rf-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/foxtrot-roger/comfyui-rf-nodes", + "title": "RF Nodes" + }, + { + "author": "abyz22", + "description": "Nodes:abyz22_Padding Image, abyz22_ImpactWildcardEncode, abyz22_setimageinfo, abyz22_SaveImage, abyz22_ImpactWildcardEncode_GetPrompt, abyz22_SetQueue, abyz22_drawmask, abyz22_FirstNonNull, abyz22_blendimages, abyz22_blend_onecolor. 
Please check workflow in [a/https://github.com/abyz22/image_control](https://github.com/abyz22/image_control)", + "files": [ + "https://github.com/abyz22/image_control" + ], + "install_type": "git-clone", + "reference": "https://github.com/abyz22/image_control", + "title": "image_control" + }, + { + "author": "HAL41", + "description": "Simple node to handle scaling of YOLOv8 segmentation masks", + "files": [ + "https://github.com/HAL41/ComfyUI-aichemy-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/HAL41/ComfyUI-aichemy-nodes", + "title": "ComfyUI aichemy nodes" + }, + { + "author": "nkchocoai", + "description": "Add a node that outputs width and height of the size selected from the preset (.csv).", + "files": [ + "https://github.com/nkchocoai/ComfyUI-SizeFromPresets" + ], + "install_type": "git-clone", + "reference": "https://github.com/nkchocoai/ComfyUI-SizeFromPresets", + "title": "ComfyUI-SizeFromPresets" + }, + { + "author": "nkchocoai", + "description": "Nodes: Format String, Join String List, Load Preset, Load Preset (Advanced), Const String, Const String (multi line). Add useful nodes related to prompt.", + "files": [ + "https://github.com/nkchocoai/ComfyUI-PromptUtilities" + ], + "install_type": "git-clone", + "reference": "https://github.com/nkchocoai/ComfyUI-PromptUtilities", + "title": "ComfyUI-PromptUtilities" + }, + { + "author": "nkchocoai", + "description": "Add a node for drawing text with CR Draw Text of ComfyUI_Comfyroll_CustomNodes to the area of SEGS detected by Ultralytics Detector of ComfyUI-Impact-Pack.", + "files": [ + "https://github.com/nkchocoai/ComfyUI-TextOnSegs" + ], + "install_type": "git-clone", + "reference": "https://github.com/nkchocoai/ComfyUI-TextOnSegs", + "title": "ComfyUI-TextOnSegs" + }, + { + "author": "nkchocoai", + "description": "Add a node to save images with metadata (PNGInfo) extracted from the input values of each node.\nSince the values are extracted dynamically, values output by various extension nodes can be added to metadata.", + "files": [ + "https://github.com/nkchocoai/ComfyUI-SaveImageWithMetaData" + ], + "install_type": "git-clone", + "reference": "https://github.com/nkchocoai/ComfyUI-SaveImageWithMetaData", + "title": "ComfyUI-SaveImageWithMetaData" + }, + { + "author": "JaredTherriault", + "description": "python and web UX improvements for ComfyUI.\n[w/'DynamicPrompts.js' and 'EditAttention.js' from the core, along with 'ImageFeed.js' and 'favicon.js' from the custom scripts of pythongosssss, are not compatible. Therefore, manual deletion of these files is required to use this web extension.]", + "files": [ + "https://github.com/JaredTherriault/ComfyUI-JNodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/JaredTherriault/ComfyUI-JNodes", + "title": "ComfyUI-JNodes" + }, + { + "author": "prozacgod", + "description": "A simple, quick, and dirty implementation of multiple workspaces within ComfyUI.", + "files": [ + "https://github.com/prozacgod/comfyui-pzc-multiworkspace" + ], + "install_type": "git-clone", + "reference": "https://github.com/prozacgod/comfyui-pzc-multiworkspace", + "title": "ComfyUI Multi-Workspace" + }, + { + "author": "Siberpone", + "description": "A pony prompt helper extension for AUTOMATIC1111's Stable Diffusion Web UI and ComfyUI that utilizes the full power of your favorite booru query syntax. 
Currently supports [a/Derpibooru](https://derpibooru/org) and [a/E621](https://e621.net/).", + "files": [ + "https://github.com/Siberpone/lazy-pony-prompter" + ], + "install_type": "git-clone", + "reference": "https://github.com/Siberpone/lazy-pony-prompter", + "title": "Lazy Pony Prompter" + }, + { + "author": "chflame163", + "description": "A set of nodes for ComfyUI it generate image like Adobe Photoshop's Layer Style. the Drop Shadow is first completed node, and follow-up work is in progress.", + "files": [ + "https://github.com/chflame163/ComfyUI_LayerStyle" + ], + "install_type": "git-clone", + "reference": "https://github.com/chflame163/ComfyUI_LayerStyle", + "title": "ComfyUI Layer Style" + }, + { + "author": "dave-palt", + "description": "Nodes: DSP Image Concat", + "files": [ + "https://github.com/dave-palt/comfyui_DSP_imagehelpers" + ], + "install_type": "git-clone", + "reference": "https://github.com/dave-palt/comfyui_DSP_imagehelpers", + "title": "comfyui_DSP_imagehelpers" + }, + { + "author": "Inzaniak", + "description": "Ranbooru is an extension for the comfyUI. The purpose of this extension is to add a node that gets a random set of tags from boorus pictures. This is mostly being used to help me test my checkpoints on a large variety of", + "files": [ + "https://github.com/Inzaniak/comfyui-ranbooru" + ], + "install_type": "git-clone", + "reference": "https://github.com/Inzaniak/comfyui-ranbooru", + "title": "Ranbooru for ComfyUI" + }, + { + "author": "miosp", + "description": "A node for JPEG de-artifacting using [a/FBCNN](https://github.com/jiaxi-jiang/FBCNN).", + "files": [ + "https://github.com/Miosp/ComfyUI-FBCNN" + ], + "install_type": "git-clone", + "reference": "https://github.com/Miosp/ComfyUI-FBCNN", + "title": "ComfyUI-FBCNN" + }, + { + "author": "JcandZero", + "description": "GLM4 Vision Integration", + "files": [ + "https://github.com/JcandZero/ComfyUI_GLM4Node" + ], + "install_type": "git-clone", + "reference": "https://github.com/JcandZero/ComfyUI_GLM4Node", + "title": "ComfyUI_GLM4Node" + }, + { + "author": "darkpixel", + "description": "Slightly better random prompt generation tools that allow combining and picking prompts from both file and text input sources.", + "files": [ + "https://github.com/darkpixel/darkprompts" + ], + "install_type": "git-clone", + "reference": "https://github.com/darkpixel/darkprompts", + "title": "DarkPrompts" + }, + { + "author": "shiimizu", + "description": "ComfyUI reference implementation for [a/PhotoMaker](https://github.com/TencentARC/PhotoMaker) models. [w/WARN:The repository name has been changed. 
For those who have previously installed it, please delete custom_nodes/ComfyUI-PhotoMaker from disk and reinstall this.]", + "files": [ + "https://github.com/shiimizu/ComfyUI-PhotoMaker-Plus" + ], + "install_type": "git-clone", + "reference": "https://github.com/shiimizu/ComfyUI-PhotoMaker-Plus", + "title": "ComfyUI PhotoMaker Plus" + }, + { + "author": "Qais Malkawi", + "description": "This Extension adds a few custom QOL nodes that ComfyUI lacks by default.", + "files": [ + "https://github.com/QaisMalkawi/ComfyUI-QaisHelper" + ], + "install_type": "git-clone", + "reference": "https://github.com/QaisMalkawi/ComfyUI-QaisHelper", + "title": "ComfyUI-Qais-Helper" + }, + { + "author": "longgui0318", + "description": "Nodes:Split Masks", + "files": [ + "https://github.com/longgui0318/comfyui-mask-util" + ], + "install_type": "git-clone", + "reference": "https://github.com/longgui0318/comfyui-mask-util", + "title": "comfyui-mask-util" + }, + { + "author": "DimaChaichan", + "description": "This exporter is a plugin for ComfyUI, which can export tasks for [a/LAizypainter](https://github.com/DimaChaichan/LAizypainter).\nLAizypainter is a Photoshop plugin with which you can send tasks directly to a Stable Diffusion server. More information about a [a/Task](https://github.com/DimaChaichan/LAizypainter?tab=readme-ov-file#task)", + "files": [ + "https://github.com/DimaChaichan/LAizypainter-Exporter-ComfyUI" + ], + "install_type": "git-clone", + "reference": "https://github.com/DimaChaichan/LAizypainter-Exporter-ComfyUI", + "title": "LAizypainter-Exporter-ComfyUI" + }, + { + "author": "adriflex", + "description": "Nodes:Blender viewport color, Blender Viewport depth", + "files": [ + "https://github.com/adriflex/ComfyUI_Blender_Texdiff" + ], + "install_type": "git-clone", + "reference": "https://github.com/adriflex/ComfyUI_Blender_Texdiff", + "title": "ComfyUI_Blender_Texdiff" + }, + { + "author": "Shraknard", + "description": "Custom node for ComfyUI that makes parts of the image transparent (face, background...)", + "files": [ + "https://github.com/Shraknard/ComfyUI-Remover" + ], + "install_type": "git-clone", + "reference": "https://github.com/Shraknard/ComfyUI-Remover", + "title": "ComfyUI-Remover" + }, + { + "author": "Abdullah Ozmantar", + "description": "A quick and easy ComfyUI custom nodes for ultra-quality, lightning-speed face swapping of humans.", + "files": [ + "https://github.com/abdozmantar/ComfyUI-InstaSwap" + ], + "install_type": "git-clone", + "reference": "https://github.com/abdozmantar/ComfyUI-InstaSwap", + "title": "InstaSwap Face Swap Node for ComfyUI" + }, + { + "author": "FlyingFireCo", + "description": "Nodes:Tiled KSampler, Asymmetric Tiled KSampler, Circular VAEDecode.", + "files": [ + "https://github.com/FlyingFireCo/tiled_ksampler" + ], + "install_type": "git-clone", + "reference": "https://github.com/FlyingFireCo/tiled_ksampler", + "title": "tiled_ksampler" + }, + { + "author": "Nlar", + "description": "Front end ComfyUI nodes for CartoonSegmentation Based upon the work of the CartoonSegmentation repository this project will provide a front end to some of the features.", + "files": [ + "https://github.com/Nlar/ComfyUI_CartoonSegmentation" + ], + "install_type": "git-clone", + "reference": "https://github.com/Nlar/ComfyUI_CartoonSegmentation", + "title": "ComfyUI_CartoonSegmentation" + }, + { + "author": "godspede", + "description": "Just a simple substring node that takes text and length as input, and outputs the first length characters.", + "files": [ + 
"https://github.com/godspede/ComfyUI_Substring" + ], + "install_type": "git-clone", + "reference": "https://github.com/godspede/ComfyUI_Substring", + "title": "ComfyUI Substring" + }, + { + "author": "gokayfem", + "description": "Nodes:VisionQuestionAnswering Node, PromptGenerate Node", + "files": [ + "https://github.com/gokayfem/ComfyUI_VLM_nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/gokayfem/ComfyUI_VLM_nodes", + "title": "VLM_nodes" + }, + { + "author": "Hiero207", + "description": "Nodes:Post to Discord w/ Webhook", + "files": [ + "https://github.com/Hiero207/ComfyUI-Hiero-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Hiero207/ComfyUI-Hiero-Nodes", + "title": "ComfyUI-Hiero-Nodes" + }, + { + "author": "azure-dragon-ai", + "description": "Nodes:ImageScore, Loader, Image Processor, Real Image Processor, Fake Image Processor, Text Processor. ComfyUI Nodes for ClipScore", + "files": [ + "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes", + "title": "ComfyUI-ClipScore-Nodes" + }, + { + "author": "yuvraj108c", + "description": "Transcribe audio and add subtitles to videos using Whisper in ComfyUI", + "files": [ + "https://github.com/yuvraj108c/ComfyUI-Whisper" + ], + "install_type": "git-clone", + "reference": "https://github.com/yuvraj108c/ComfyUI-Whisper", + "title": "ComfyUI Whisper" + }, + { + "author": "blepping", + "description": "Better TAESD previews, BlehHyperTile.", + "files": [ + "https://github.com/blepping/ComfyUI-bleh" + ], + "install_type": "git-clone", + "reference": "https://github.com/blepping/ComfyUI-bleh", + "title": "ComfyUI-bleh" + }, + { + "author": "blepping", + "description": "A janky implementation of Sonar sampling (momentum-based sampling) for ComfyUI.", + "files": [ + "https://github.com/blepping/ComfyUI-sonar" + ], + "install_type": "git-clone", + "reference": "https://github.com/blepping/ComfyUI-sonar", + "title": "ComfyUI-sonar" + }, + { + "author": "JerryOrbachJr", + "description": "A ComfyUI custom node that randomly selects a height and width pair from a list in a config file", + "files": [ + "https://github.com/JerryOrbachJr/ComfyUI-RandomSize" + ], + "install_type": "git-clone", + "reference": "https://github.com/JerryOrbachJr/ComfyUI-RandomSize", + "title": "ComfyUI-RandomSize" + }, + { + "author": "jamal-alkharrat", + "description": "ComfyUI Custom Node to Rotate Images, Img2Img node.", + "files": [ + "https://github.com/jamal-alkharrat/ComfyUI_rotate_image" + ], + "install_type": "git-clone", + "reference": "https://github.com/jamal-alkharrat/ComfyUI_rotate_image", + "title": "ComfyUI_rotate_image" + }, + { + "author": "mape", + "description": "Multi-monitor image preview, Variable Assigment/Wireless Nodes, Prompt Tweaking, Command Palette, Pinned favourite nodes, Node navigation, Fuzzy search, Node time tracking, Organizing and Error management. 
For more info visit: [a/https://comfyui.ma.pe/](https://comfyui.ma.pe/)", + "files": [ + "https://github.com/mape/ComfyUI-mape-Helpers" + ], + "install_type": "git-clone", + "reference": "https://github.com/mape/ComfyUI-mape-Helpers", + "title": "mape's ComfyUI Helpers" + }, + { + "author": "zhongpei", + "description": "Nodes:Image to Text, Loader Image to Text Model.", + "files": [ + "https://github.com/zhongpei/Comfyui_image2prompt" + ], + "install_type": "git-clone", + "reference": "https://github.com/zhongpei/Comfyui_image2prompt", + "title": "Comfyui_image2prompt" + }, + { + "author": "zhongpei", + "description": "Enhancing Image Restoration", + "files": [ + "https://github.com/zhongpei/ComfyUI-InstructIR" + ], + "install_type": "git-clone", + "reference": "https://github.com/zhongpei/ComfyUI-InstructIR", + "title": "ComfyUI for InstructIR" + }, + { + "author": "Loewen-Hob", + "description": "This custom node is based on the [a/rembg-comfyui-node](https://github.com/Jcd1230/rembg-comfyui-node) but provides additional functionality to select ONNX models.", + "files": [ + "https://github.com/Loewen-Hob/rembg-comfyui-node-better" + ], + "install_type": "git-clone", + "reference": "https://github.com/Loewen-Hob/rembg-comfyui-node-better", + "title": "Rembg Background Removal Node for ComfyUI" + }, + { + "author": "HaydenReeve", + "description": "Strings should be easy, and simple. This extension aims to provide a set of nodes that make working with strings in ComfyUI a little bit easier.", + "files": [ + "https://github.com/HaydenReeve/ComfyUI-Better-Strings" + ], + "install_type": "git-clone", + "reference": "https://github.com/HaydenReeve/ComfyUI-Better-Strings", + "title": "ComfyUI Better Strings" + }, + { + "author": "StartHua", + "description": "Nodes:segformer_clothes, segformer_agnostic, segformer_remove_bg, stabel_vition. Nodes for model dress up.", + "files": [ + "https://github.com/StartHua/ComfyUI_Seg_VITON" + ], + "install_type": "git-clone", + "reference": "https://github.com/StartHua/ComfyUI_Seg_VITON", + "title": "ComfyUI_Seg_VITON" + }, + { + "author": "StartHua", + "description": "JoyTag is a state of the art AI vision model for tagging images, with a focus on sex positivity and inclusivity. It uses the Danbooru tagging schema, but works across a wide range of images, from hand drawn to photographic.\nDownload the weight and put it under checkpoints: [a/https://huggingface.co/fancyfeast/joytag/tree/main](https://huggingface.co/fancyfeast/joytag/tree/main)", + "files": [ + "https://github.com/StartHua/Comfyui_joytag" + ], + "install_type": "git-clone", + "reference": "https://github.com/StartHua/Comfyui_joytag", + "title": "Comfyui_joytag" + }, + { + "author": "StartHua", + "description": "SegFormer model fine-tuned on ATR dataset for clothes segmentation but can also be used for human segmentation!\nDownload the weight and put it under checkpoints: [a/https://huggingface.co/mattmdjaga/segformer_b2_clothes](https://huggingface.co/mattmdjaga/segformer_b2_clothes)", + "files": [ + "https://github.com/StartHua/Comfyui_segformer_b2_clothes" + ], + "install_type": "git-clone", + "reference": "https://github.com/StartHua/Comfyui_segformer_b2_clothes", + "title": "comfyui_segformer_b2_clothes" + }, + { + "author": "ricklove", + "description": "Nodes: Image Crop and Resize by Mask, Image Uncrop, Image Shadow, Optical Flow (Dip), Warp Image with Flow, Image Threshold (Channels), Finetune Variable, Finetune Analyze, Finetune Analyze Batch, ... 
Misc ComfyUI nodes by Rick Love", + "files": [ + "https://github.com/ricklove/comfyui-ricklove" + ], + "install_type": "git-clone", + "reference": "https://github.com/ricklove/comfyui-ricklove", + "title": "comfyui-ricklove" + }, + { + "author": "nosiu", + "description": "Implementation of [a/faceswap](https://github.com/nosiu/InstantID-faceswap/tree/main) based on [a/InstantID](https://github.com/InstantID/InstantID) for ComfyUI. Allows usage of [a/LCM Lora](https://huggingface.co/latent-consistency/lcm-lora-sdxl) which can produce good results in only a few generation steps.\nNOTE:Works ONLY with SDXL checkpoints.", + "files": [ + "https://github.com/nosiu/comfyui-instantId-faceswap" + ], + "install_type": "git-clone", + "reference": "https://github.com/nosiu/comfyui-instantId-faceswap", + "title": "ComfyUI InstantID Faceswapper" + }, + { + "author": "zhongpei", + "description": "Enhancing Image Restoration. (ref:[a/InstructIR](https://github.com/mv-lab/InstructIR))", + "files": [ + "https://github.com/zhongpei/ComfyUI-InstructIR" + ], + "install_type": "git-clone", + "reference": "https://github.com/zhongpei/ComfyUI-InstructIR", + "title": "ComfyUI for InstructIR" + }, + { + "author": "LyazS", + "description": "A Anime Character Segmentation node for comfyui, based on [this hf space](https://huggingface.co/spaces/skytnt/anime-remove-background).", + "files": [ + "https://github.com/LyazS/comfyui-anime-seg" + ], + "install_type": "git-clone", + "reference": "https://github.com/LyazS/comfyui-anime-seg", + "title": "Anime Character Segmentation node for comfyui" + }, + { + "author": "Chan-0312", + "description": "This is a project that generates videos frame by frame based on IPAdapter+ControlNet. Unlike [a/Steerable-motion](https://github.com/banodoco/Steerable-Motion), we do not rely on AnimateDiff. This decision is primarily due to the fact that the videos generated by AnimateDiff are often blurry. Through frame-by-frame control using IPAdapter+ControlNet, we can produce higher definition and more controllable videos.", + "files": [ + "https://github.com/Chan-0312/ComfyUI-IPAnimate" + ], + "install_type": "git-clone", + "reference": "https://github.com/Chan-0312/ComfyUI-IPAnimate", + "title": "ComfyUI-IPAnimate" + }, + { + "author": "trumanwong", + "description": "An implementation of NSFW Detection for ComfyUI", + "files": [ + "https://github.com/trumanwong/ComfyUI-NSFW-Detection" + ], + "install_type": "git-clone", + "reference": "https://github.com/trumanwong/ComfyUI-NSFW-Detection", + "title": "ComfyUI-NSFW-Detection" + }, + { + "author": "TemryL", + "description": "ComfyS3 seamlessly integrates with [a/Amazon S3](https://aws.amazon.com/en/s3/) in ComfyUI. 
This open-source project provides custom nodes for effortless loading and saving of images, videos, and checkpoint models directly from S3 buckets within the ComfyUI graph interface.", + "files": [ + "https://github.com/TemryL/ComfyS3" + ], + "install_type": "git-clone", + "reference": "https://github.com/TemryL/ComfyS3", + "title": "ComfyS3" + }, + { + "author": "davask", + "description": "This is a revised version of the Bus node from the [a/Was Node Suite](https://github.com/WASasquatch/was-node-suite-comfyui) to integrate more input/output.", + "files": [ + "https://github.com/davask/ComfyUI-MarasIT-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/davask/ComfyUI-MarasIT-Nodes", + "title": "MarasIT Nodes" + }, + { + "author": "yffyhk", + "description": "Nodes: Get Danbooru, Tag Encode", + "files": [ + "https://github.com/yffyhk/comfyui_auto_danbooru" + ], + "install_type": "git-clone", + "reference": "https://github.com/yffyhk/comfyui_auto_danbooru", + "title": "comfyui_auto_danbooru" + }, + { + "author": "dfl", + "description": "Clip text encoder with BREAK formatting like A1111 (uses conditioning concat)", + "files": [ + "https://github.com/dfl/comfyui-clip-with-break" + ], + "install_type": "git-clone", + "reference": "https://github.com/dfl/comfyui-clip-with-break", + "title": "comfyui-clip-with-break" + }, + { + "author": "MarkoCa1", + "description": "Mask cutout based on Segment Anything.", + "files": [ + "https://github.com/MarkoCa1/ComfyUI_Segment_Mask" + ], + "install_type": "git-clone", + "reference": "https://github.com/MarkoCa1/ComfyUI_Segment_Mask", + "title": "ComfyUI_Segment_Mask" + }, + { + "author": "antrobot", + "description": "A small node pack containing various things I felt like ought to be in base comfy-UI. Currently includes Some image handling nodes to help with inpainting, a version of KSampler (advanced) that allows for denoise, and a node that can swap it's inputs. Remember to make an issue if you experience any bugs or errors!", + "files": [ + "https://github.com/antrobot1234/antrobots-comfyUI-nodepack" + ], + "install_type": "git-clone", + "reference": "https://github.com/antrobot1234/antrobots-comfyUI-nodepack", + "title": "antrobots ComfyUI Nodepack" + }, + { + "author": "bilal-arikan", + "description": "With this node you can upload text files to input folder from your local computer.", + "files": [ + "https://github.com/bilal-arikan/ComfyUI_TextAssets" + ], + "install_type": "git-clone", + "reference": "https://github.com/bilal-arikan/ComfyUI_TextAssets", + "title": "ComfyUI_TextAssets" + }, + { + "author": "kadirnar", + "description": "ComfyUI-Transformers is a cutting-edge project combining the power of computer vision and natural language processing to create intuitive and user-friendly interfaces. Our goal is to make technology more accessible and engaging.", + "files": [ + "https://github.com/kadirnar/ComfyUI-Transformers" + ], + "install_type": "git-clone", + "reference": "https://github.com/kadirnar/ComfyUI-Transformers", + "title": "ComfyUI-Transformers" + }, + { + "author": "digitaljohn", + "description": "A set of custom ComfyUI nodes for performing basic post-processing effects including Film Grain and Vignette. 
These effects can help to take the edge off AI imagery and make them feel more natural.", + "files": [ + "https://github.com/digitaljohn/comfyui-propost" + ], + "install_type": "git-clone", + "reference": "https://github.com/digitaljohn/comfyui-propost", + "title": "ComfyUI-ProPost" + }, + { + "author": "DonBaronFactory", + "description": "Nodes:CRE8IT Serial Prompter, CRE8IT Apply Serial Prompter, CRE8IT Image Sizer. A few simple nodes to facilitate working wiht ComfyUI Workflows", + "files": [ + "https://github.com/DonBaronFactory/ComfyUI-Cre8it-Nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/DonBaronFactory/ComfyUI-Cre8it-Nodes", + "title": "ComfyUI-Cre8it-Nodes" + }, + { + "author": "deforum", + "description": "Official Deforum animation pipeline tools that provide a unique way to create frame-by-frame generative motion art.", + "files": [ + "https://github.com/XmYx/deforum-comfy-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/XmYx/deforum-comfy-nodes", + "title": "Deforum Nodes" + }, + { + "author": "adbrasi", + "description": "ComfyUI-TrashNodes-DownloadHuggingface is a ComfyUI node designed to facilitate the download of models you have just trained and uploaded to Hugging Face. This node is particularly useful for users who employ Google Colab for training and need to quickly download their models for deployment.", + "files": [ + "https://github.com/adbrasi/ComfyUI-TrashNodes-DownloadHuggingface" + ], + "install_type": "git-clone", + "reference": "https://github.com/adbrasi/ComfyUI-TrashNodes-DownloadHuggingface", + "title": "ComfyUI-TrashNodes-DownloadHuggingface" + }, + { + "author": "mbrostami", + "description": "ComfyUI Node to work with Hugging Face repositories", + "files": [ + "https://github.com/mbrostami/ComfyUI-HF" + ], + "install_type": "git-clone", + "reference": "https://github.com/mbrostami/ComfyUI-HF", + "title": "ComfyUI-HF" + }, + { + "author": "Billius-AI", + "description": "Nodes:Create Project Root, Add Folder, Add Folder Advanced, Add File Name Prefix, Add File Name Prefix Advanced, ShowPath", + "files": [ + "https://github.com/Billius-AI/ComfyUI-Path-Helper" + ], + "install_type": "git-clone", + "reference": "https://github.com/Billius-AI/ComfyUI-Path-Helper", + "title": "ComfyUI-Path-Helper" + }, + { + "author": "Franck-Demongin", + "description": "A custom node for ComfyUI to create a prompt based on a list of keywords saved in CSV files.", + "files": [ + "https://github.com/Franck-Demongin/NX_PromptStyler" + ], + "install_type": "git-clone", + "reference": "https://github.com/Franck-Demongin/NX_PromptStyler", + "title": "NX_PromptStyler" + }, + { + "author": "xiaoxiaodesha", + "description": "Nodes:Combine HDMasks, Cover HDMasks, HD FaceIndex, HD SmoothEdge, HD GetMaskArea, HD Image Levels, HD Ultimate SD Upscale", + "files": [ + "https://github.com/xiaoxiaodesha/hd_node" + ], + "install_type": "git-clone", + "reference": "https://github.com/xiaoxiaodesha/hd_node", + "title": "hd-nodes-comfyui" + }, + { + "author": "ShmuelRonen", + "description": "SVDResizer is a helper for resizing the source image, according to the sizes enabled in Stable Video Diffusion. The rationale behind the possibility of changing the size of the image in steps between the ranges of 576 and 1024, is the use of the greatest common denominator of these two numbers which is 64. SVD is lenient with resizing that adheres to this rule, so the chance of coherent video that is not the standard size of 576X1024 is greater. 
It is advisable to keep the value 1024 constant and play with the second size to maintain the stability of the result.", + "files": [ + "https://github.com/ShmuelRonen/ComfyUI-SVDResizer" + ], + "install_type": "git-clone", + "reference": "https://github.com/ShmuelRonen/ComfyUI-SVDResizer", + "title": "ComfyUI-SVDResizer" + }, + { + "author": "redhottensors", + "description": "Fully customizable Classifier Free Guidance for ComfyUI.", + "files": [ + "https://github.com/redhottensors/ComfyUI-Prediction" + ], + "install_type": "git-clone", + "reference": "https://github.com/redhottensors/ComfyUI-Prediction", + "title": "ComfyUI-Prediction" + }, + { + "author": "Mamaaaamooooo", + "description": "Remove background of plural images.", + "files": [ + "https://github.com/Mamaaaamooooo/batchImg-rembg-ComfyUI-nodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/Mamaaaamooooo/batchImg-rembg-ComfyUI-nodes", + "title": "Batch Rembg for ComfyUI" + }, + { + "author": "jordoh", + "description": "ComfyUI nodes wrapping the [a/deepface](https://github.com/serengil/deepface) library.", + "files": [ + "https://github.com/jordoh/ComfyUI-Deepface" + ], + "install_type": "git-clone", + "reference": "https://github.com/jordoh/ComfyUI-Deepface", + "title": "ComfyUI Deepface" + }, + { + "author": "yuvraj108c", + "description": "A collection of nice utility nodes for ComfyUI", + "files": [ + "https://github.com/yuvraj108c/ComfyUI-Pronodes" + ], + "install_type": "git-clone", + "reference": "https://github.com/yuvraj108c/ComfyUI-Pronodes", + "title": "ComfyUI-Pronodes" + }, + { + "author": "GavChap", + "description": "Nodes:Cascade Resolutions", + "files": [ + "https://github.com/GavChap/ComfyUI-CascadeResolutions" + ], + "install_type": "git-clone", + "reference": "https://github.com/GavChap/ComfyUI-CascadeResolutions", + "title": "ComfyUI-CascadeResolutions" + }, + { + "author": "yuvraj108c", + "description": "Nodes:Upscale Video Tensorrt", + "files": [ + "https://github.com/yuvraj108c/ComfyUI-Vsgan" + ], + "install_type": "git-clone", + "reference": "https://github.com/yuvraj108c/ComfyUI-Vsgan", + "title": "ComfyUI-Vsgan" + }, + { + "author": "Ser-Hilary", + "description": "Nodes:sizing_node. Size calculation node related to image size in prompts supported by SDXL.", + "files": [ + "https://github.com/Ser-Hilary/SDXL_sizing/raw/main/conditioning_sizing_for_SDXL.py" + ], + "install_type": "copy", + "reference": "https://github.com/Ser-Hilary/SDXL_sizing", + "title": "SDXL_sizing" + }, + { + "author": "ailex000", + "description": "Custom javascript extensions for better UX for ComfyUI. Supported nodes: PreviewImage, SaveImage. 
Double click on image to open.", + "files": [ + "https://github.com/ailex000/ComfyUI-Extensions/raw/main/image-gallery/imageGallery.js" + ], + "install_type": "copy", + "js_path": "image-gallery", + "reference": "https://github.com/ailex000/ComfyUI-Extensions", + "title": "Image Gallery" + }, + { + "author": "rock-land", + "description": "ComfyUI Web Extension for saving views and navigating graphs.", + "files": [ + "https://github.com/rock-land/graphNavigator/raw/main/graphNavigator/graphNavigator.js" + ], + "install_type": "copy", + "js_path": "graphNavigator", + "reference": "https://github.com/rock-land/graphNavigator", + "title": "graphNavigator" + }, + { + "author": "diffus3", + "description": "Extensions: subgraph, setget, multiReroute", + "files": [ + "https://github.com/diffus3/ComfyUI-extensions/raw/main/multiReroute/multireroute.js", + "https://github.com/diffus3/ComfyUI-extensions/raw/main/setget/setget.js" + ], + "install_type": "copy", + "js_path": "diffus3", + "reference": "https://github.com/diffus3/ComfyUI-extensions", + "title": "diffus3/ComfyUI-extensions" + }, + { + "author": "m957ymj75urz", + "description": "Nodes: RawText, RawTextCLIPEncode, RawTextCombine, RawTextReplace, Extension: m957ymj75urz.colors", + "files": [ + "https://github.com/m957ymj75urz/ComfyUI-Custom-Nodes/raw/main/clip-text-encode-split/clip_text_encode_split.py", + "https://github.com/m957ymj75urz/ComfyUI-Custom-Nodes/raw/main/colors/colors.js" + ], + "install_type": "copy", + "js_path": "m957ymj75urz", + "reference": "https://github.com/m957ymj75urz/ComfyUI-Custom-Nodes", + "title": "m957ymj75urz/ComfyUI-Custom-Nodes" + }, + { + "author": "Bikecicle", + "description": "Some additional audio utilites for use on top of Sample Diffusion ComfyUI Extension", + "files": [ + "https://github.com/Bikecicle/ComfyUI-Waveform-Extensions/raw/main/EXT_AudioManipulation.py", + "https://github.com/Bikecicle/ComfyUI-Waveform-Extensions/raw/main/EXT_VariationUtils.py" + ], + "install_type": "copy", + "reference": "https://github.com/Bikecicle/ComfyUI-Waveform-Extensions", + "title": "Waveform Extensions" + }, + { + "author": "dawangraoming", + "description": "KSampler is provided, based on GPU random noise", + "files": [ + "https://github.com/dawangraoming/ComfyUI_ksampler_gpu/raw/main/ksampler_gpu.py" + ], + "install_type": "copy", + "reference": "https://github.com/dawangraoming/ComfyUI_ksampler_gpu", + "title": "KSampler GPU" + }, + { + "author": "fitCorder", + "description": "fcFloatMatic is a custom module, that when configured correctly will increment through the lines generating you loras at different strengths. The JSON file will load the config.", + "files": [ + "https://github.com/fitCorder/fcSuite/raw/main/fcSuite.py" + ], + "install_type": "copy", + "reference": "https://github.com/fitCorder/fcSuite", + "title": "fcSuite" + }, + { + "author": "lrzjason", + "description": "Nodes:SDXLMixSampler, LatentByRatio", + "files": [ + "https://github.com/lrzjason/ComfyUIJasonNode/raw/main/SDXLMixSampler.py", + "https://github.com/lrzjason/ComfyUIJasonNode/raw/main/LatentByRatio.py", + "" + ], + "install_type": "copy", + "reference": "https://github.com/lrzjason/ComfyUIJasonNode", + "title": "ComfyUIJasonNode" + }, + { + "author": "lordgasmic", + "description": "Nodes:CLIPTextEncodeWithWildcards. 
This wildcard node is a wildcard node that operates based on the seed.", + "files": [ + "https://github.com/lordgasmic/ComfyUI-Wildcards/raw/master/wildcards.py" + ], + "install_type": "copy", + "reference": "https://github.com/lordgasmic/ComfyUI-Wildcards", + "title": "Wildcards" + }, + { + "author": "throttlekitty", + "description": "A quick and easy ComfyUI custom node for setting SDXL-friendly aspect ratios.", + "files": [ + "https://raw.githubusercontent.com/throttlekitty/SDXLCustomAspectRatio/main/SDXLAspectRatio.py" + ], + "install_type": "copy", + "reference": "https://github.com/throttlekitty/SDXLCustomAspectRatio", + "title": "SDXLCustomAspectRatio" + }, + { + "author": "s1dlx", + "description": "Advanced merging methods.", + "files": [ + "https://github.com/s1dlx/comfy_meh/raw/main/meh.py" + ], + "install_type": "copy", + "reference": "https://github.com/s1dlx/comfy_meh", + "title": "comfy_meh" + }, + { + "author": "tudal", + "description": "Nodes: Prompt parser. ComfyUI extra nodes. Mostly prompt parsing.", + "files": [ + "https://github.com/tudal/Hakkun-ComfyUI-nodes/raw/main/hakkun_nodes.py" + ], + "install_type": "copy", + "reference": "https://github.com/tudal/Hakkun-ComfyUI-nodes", + "title": "Hakkun-ComfyUI-nodes" + }, + { + "author": "SadaleNet", + "description": "Nodes: CLIPTextEncodeA1111, RerouteTextForCLIPTextEncodeA1111.", + "files": [ + "https://github.com/SadaleNet/CLIPTextEncodeA1111-ComfyUI/raw/master/custom_nodes/clip_text_encoder_a1111.py" + ], + "install_type": "copy", + "reference": "https://github.com/SadaleNet/CLIPTextEncodeA1111-ComfyUI", + "title": "ComfyUI A1111-like Prompt Custom Node Solution" + }, + { + "author": "wsippel", + "description": "Nodes: SDXLResolutionPresets. Easy access to the officially supported resolutions, in both horizontal and vertical formats: 1024x1024, 1152x896, 1216x832, 1344x768, 1536x640", + "files": [ + "https://github.com/wsippel/comfyui_ws/raw/main/sdxl_utility.py" + ], + "install_type": "copy", + "reference": "https://github.com/wsippel/comfyui_ws", + "title": "SDXLResolutionPresets" + }, + { + "author": "nicolai256", + "description": "Nodes: yugioh_Presets. by Nicolai256 inspired by throttlekitty SDXLAspectRatio", + "files": [ + "https://github.com/nicolai256/comfyUI_Nodes_nicolai256/raw/main/yugioh-presets.py" + ], + "install_type": "copy", + "reference": "https://github.com/nicolai256/comfyUI_Nodes_nicolai256", + "title": "comfyUI_Nodes_nicolai256" + }, + { + "author": "Onierous", + "description": "Nodes: QRNG Node CSV. A node that takes in an array of random numbers from the ANU QRNG API and stores them locally for generating quantum random number noise_seeds in ComfyUI", + "files": [ + "https://github.com/Onierous/QRNG_Node_ComfyUI/raw/main/qrng_node.py" + ], + "install_type": "copy", + "reference": "https://github.com/Onierous/QRNG_Node_ComfyUI", + "title": "QRNG_Node_ComfyUI" + }, + { + "author": "ntdviet", + "description": "Nodes:LatentGarbageCollector. This ComfyUI custom node flushes the GPU cache and empty cuda interprocess memory. 
It's helpfull for low memory environment such as the free Google Colab, especially when the workflow VAE decode latents of the size above 1500x1500.", + "files": [ + "https://github.com/ntdviet/comfyui-ext/raw/main/custom_nodes/gcLatentTunnel/gcLatentTunnel.py" + ], + "install_type": "copy", + "reference": "https://github.com/ntdviet/comfyui-ext", + "title": "ntdviet/comfyui-ext" + }, + { + "author": "alkemann", + "description": "Nodes:Int to Text, Seed With Text, Save A1 Image.", + "files": [ + "https://gist.github.com/alkemann/7361b8eb966f29c8238fd323409efb68/raw/f9605be0b38d38d3e3a2988f89248ff557010076/alkemann.py" + ], + "install_type": "copy", + "reference": "https://gist.github.com/alkemann/7361b8eb966f29c8238fd323409efb68", + "title": "alkemann nodes" + }, + { + "author": "catscandrive", + "description": "Adds an Image Loader node that also shows images in subfolders of the default input directory", + "files": [ + "https://github.com/catscandrive/comfyui-imagesubfolders/raw/main/loadImageWithSubfolders.py" + ], + "install_type": "copy", + "reference": "https://github.com/catscandrive/comfyui-imagesubfolders", + "title": "Image loader with subfolders" + }, + { + "author": "Smuzzies", + "description": "Nodes: Chatbox Overlay. Custom node for ComfyUI to add a text box over a processed image before save node.", + "files": [ + "https://github.com/Smuzzies/comfyui_chatbox_overlay/raw/main/chatbox_overlay.py" + ], + "install_type": "copy", + "reference": "https://github.com/Smuzzies/comfyui_chatbox_overlay", + "title": "Chatbox Overlay node for ComfyUI" + }, + { + "author": "CaptainGrock", + "description": "Nodes:Apply Invisible Watermark, Extract Watermark. Adds up to 12 characters encoded into an image that can be extracted.", + "files": [ + "https://github.com/CaptainGrock/ComfyUIInvisibleWatermark/raw/main/Invisible%20Watermark.py" + ], + "install_type": "copy", + "reference": "https://github.com/CaptainGrock/ComfyUIInvisibleWatermark", + "title": "ComfyUIInvisibleWatermark" + }, + { + "author": "fearnworks", + "description": "A collection of ComfyUI nodes. These nodes are tailored for specific tasks, such as counting files in directories and sorting text segments based on token counts. Currently this is only tested on SDXL 1.0 models. 
An additional swich is needed to hand 1.x", + "files": [ + "https://github.com/fearnworks/ComfyUI_FearnworksNodes/raw/main/fw_nodes.py" + ], + "install_type": "copy", + "reference": "https://github.com/fearnworks/ComfyUI_FearnworksNodes", + "title": "Fearnworks Custom Nodes" + }, + { + "author": "LZC", + "description": "Nodes:tensor_trans_pil, Make Transparent mask, MergeImages, words_generatee, load_PIL image", + "files": [ + "https://github.com/1shadow1/hayo_comfyui_nodes/raw/main/LZCNodes.py" + ], + "install_type": "copy", + "reference": "https://github.com/1shadow1/hayo_comfyui_nodes", + "title": "Hayo comfyui nodes" + }, + { + "author": "celsojr2013", + "description": "Nodes:Simple Gooogle Translator Client, Simple Mustache Parameter Switcher, Simple Latent Resolution Solver.", + "files": [ + "https://github.com/celsojr2013/comfyui_simpletools/raw/main/google_translator.py", + "https://github.com/celsojr2013/comfyui_simpletools/raw/main/parameters.py", + "https://github.com/celsojr2013/comfyui_simpletools/raw/main/resolution_solver.py" + ], + "install_type": "copy", + "reference": "https://github.com/celsojr2013/comfyui_simpletools", + "title": "ComfyUI SimpleTools Suit" + }, + { + "author": "underclockeddev", + "description": "A node which takes in x, y, width, height, total width, and total height, in order to accurately represent the area of an image which is covered by area-based conditioning.", + "files": [ + "https://github.com/underclockeddev/ComfyUI-PreviewSubselection-Node/raw/master/preview_subselection.py" + ], + "install_type": "copy", + "reference": "https://github.com/underclockeddev/ComfyUI-PreviewSubselection-Node", + "title": "Preview Subselection Node for ComfyUI" + }, + { + "author": "AshMartian", + "description": "A collection of ComfyUI directory automation utility nodes. 
Directory Get It Right adds a GUI directory browser, and smart directory loop/iteration node that supports regex and file extension filtering.", + "files": [ + "https://github.com/AshMartian/ComfyUI-DirGir/raw/main/dir_picker.py", + "https://github.com/AshMartian/ComfyUI-DirGir/raw/main/dir_loop.py" + ], + "install_type": "copy", + "reference": "https://github.com/AshMartian/ComfyUI-DirGir", + "title": "Dir Gir" + }, + { + "author": "theally", + "description": "Custom nodes for ComfyUI by TheAlly.", + "files": [ + "https://civitai.com/api/download/models/25114", + "https://civitai.com/api/download/models/24679", + "https://civitai.com/api/download/models/24154", + "https://civitai.com/api/download/models/23884", + "https://civitai.com/api/download/models/23649", + "https://civitai.com/api/download/models/23467", + "https://civitai.com/api/download/models/23296" + ], + "install_type": "unzip", + "reference": "https://civitai.com/models/19625?modelVersionId=23296", + "title": "TheAlly's Custom Nodes" + }, + { + "author": "xss", + "description": "Various image processing nodes.", + "files": [ + "https://civitai.com/api/download/models/32717", + "https://civitai.com/api/download/models/47776", + "https://civitai.com/api/download/models/29772", + "https://civitai.com/api/download/models/31618", + "https://civitai.com/api/download/models/31591", + "https://civitai.com/api/download/models/29773", + "https://civitai.com/api/download/models/29774", + "https://civitai.com/api/download/models/29755", + "https://civitai.com/api/download/models/29750" + ], + "install_type": "unzip", + "reference": "https://civitai.com/models/24869/comfyui-custom-nodes-by-xss", + "title": "Custom Nodes by xss" + }, + { + "author": "aimingfail", + "description": "This is a node to convert an image into a CMYK Halftone dot image.", + "files": [ + "https://civitai.com/api/download/models/158997" + ], + "install_type": "unzip", + "reference": "https://civitai.com/models/143293/image2halftone-node-for-comfyui", + "title": "Image2Halftone Node for ComfyUI" + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/.cache/1742899825_extension-node-map.json b/custom_nodes/ComfyUI-Manager/.cache/1742899825_extension-node-map.json new file mode 100644 index 0000000000000000000000000000000000000000..f35bbe55d4b80b43bacc861e9cb9fae9fe862aa5 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/.cache/1742899825_extension-node-map.json @@ -0,0 +1,8481 @@ +{ + "https://gist.github.com/alkemann/7361b8eb966f29c8238fd323409efb68/raw/f9605be0b38d38d3e3a2988f89248ff557010076/alkemann.py": [ + [ + "Int to Text", + "Save A1 Image", + "Seed With Text" + ], + { + "title_aux": "alkemann nodes" + } + ], + "https://git.mmaker.moe/mmaker/sd-webui-color-enhance": [ + [ + "MMakerColorBlend", + "MMakerColorEnhance" + ], + { + "title_aux": "Color Enhance" + } + ], + "https://github.com/0xbitches/ComfyUI-LCM": [ + [ + "LCM_Sampler", + "LCM_Sampler_Advanced", + "LCM_img2img_Sampler", + "LCM_img2img_Sampler_Advanced" + ], + { + "title_aux": "Latent Consistency Model for ComfyUI" + } + ], + "https://github.com/1shadow1/hayo_comfyui_nodes/raw/main/LZCNodes.py": [ + [ + "LoadPILImages", + "MergeImages", + "make_transparentmask", + "tensor_trans_pil", + "words_generatee" + ], + { + "title_aux": "Hayo comfyui nodes" + } + ], + "https://github.com/42lux/ComfyUI-safety-checker": [ + [ + "Safety Checker" + ], + { + "title_aux": "ComfyUI-safety-checker" + } + ], + "https://github.com/54rt1n/ComfyUI-DareMerge": [ + [ + 
"DM_AdvancedDareModelMerger", + "DM_AdvancedModelMerger", + "DM_AttentionGradient", + "DM_BlockGradient", + "DM_BlockModelMerger", + "DM_DareClipMerger", + "DM_DareModelMergerBlock", + "DM_DareModelMergerElement", + "DM_DareModelMergerMBW", + "DM_GradientEdit", + "DM_GradientOperations", + "DM_GradientReporting", + "DM_InjectNoise", + "DM_LoRALoaderTags", + "DM_LoRAReporting", + "DM_MBWGradient", + "DM_MagnitudeMasker", + "DM_MaskEdit", + "DM_MaskOperations", + "DM_MaskReporting", + "DM_ModelReporting", + "DM_NormalizeModel", + "DM_QuadMasker", + "DM_ShellGradient", + "DM_SimpleMasker" + ], + { + "title_aux": "ComfyUI-DareMerge" + } + ], + "https://github.com/80sVectorz/ComfyUI-Static-Primitives": [ + [ + "FloatStaticPrimitive", + "IntStaticPrimitive", + "StringMlStaticPrimitive", + "StringStaticPrimitive" + ], + { + "title_aux": "ComfyUI-Static-Primitives" + } + ], + "https://github.com/AInseven/ComfyUI-fastblend": [ + [ + "FillDarkMask", + "InterpolateKeyFrame", + "MaskListcaptoBatch", + "MyOpenPoseNode", + "SmoothVideo", + "reBatchImage" + ], + { + "title_aux": "ComfyUI-fastblend" + } + ], + "https://github.com/AIrjen/OneButtonPrompt": [ + [ + "AutoNegativePrompt", + "CreatePromptVariant", + "OneButtonPreset", + "OneButtonPrompt", + "SavePromptToFile" + ], + { + "title_aux": "One Button Prompt" + } + ], + "https://github.com/AbdullahAlfaraj/Comfy-Photoshop-SD": [ + [ + "APS_LatentBatch", + "APS_Seed", + "ContentMaskLatent", + "ControlNetScript", + "ControlnetUnit", + "GaussianLatentImage", + "GetConfig", + "LoadImageBase64", + "LoadImageWithMetaData", + "LoadLorasFromPrompt", + "MaskExpansion" + ], + { + "title_aux": "Comfy-Photoshop-SD" + } + ], + "https://github.com/AbyssYuan0/ComfyUI_BadgerTools": [ + [ + "ApplyMaskToImage-badger", + "CropImageByMask-badger", + "ExpandImageWithColor-badger", + "FindThickLinesFromCanny-badger", + "FloatToInt-badger", + "FloatToString-badger", + "FrameToVideo-badger", + "GarbageCollect-badger", + "GetColorFromBorder-badger", + "GetDirName-badger", + "GetUUID-badger", + "IdentifyBorderColorToMask-badger", + "IdentifyColorToMask-badger", + "ImageNormalization-badger", + "ImageOverlap-badger", + "ImageScaleToSide-badger", + "IntToString-badger", + "SegmentToMaskByPoint-badger", + "StringToFizz-badger", + "TextListToString-badger", + "TrimTransparentEdges-badger", + "VideoCutFromDir-badger", + "VideoToFrame-badger", + "deleteDir-badger", + "findCenterOfMask-badger", + "getImageSide-badger", + "getParentDir-badger", + "mkdir-badger" + ], + { + "title_aux": "ComfyUI_BadgerTools" + } + ], + "https://github.com/Acly/comfyui-inpaint-nodes": [ + [ + "INPAINT_ApplyFooocusInpaint", + "INPAINT_InpaintWithModel", + "INPAINT_LoadFooocusInpaint", + "INPAINT_LoadInpaintModel", + "INPAINT_MaskedBlur", + "INPAINT_MaskedFill", + "INPAINT_VAEEncodeInpaintConditioning" + ], + { + "title_aux": "ComfyUI Inpaint Nodes" + } + ], + "https://github.com/Acly/comfyui-tooling-nodes": [ + [ + "ETN_ApplyMaskToImage", + "ETN_CropImage", + "ETN_LoadImageBase64", + "ETN_LoadMaskBase64", + "ETN_SendImageWebSocket" + ], + { + "title_aux": "ComfyUI Nodes for External Tooling" + } + ], + "https://github.com/Amorano/Jovimetrix": [ + [], + { + "author": "amorano", + "description": "Webcams, GLSL shader, Media Streaming, Tick animation, Image manipulation,", + "nodename_pattern": " \\(jov\\)$", + "title": "Jovimetrix", + "title_aux": "Jovimetrix Composition Nodes" + } + ], + "https://github.com/ArtBot2023/CharacterFaceSwap": [ + [ + "Color Blend", + "Crop Face", + "Exclude Facial Feature", + 
"Generation Parameter Input", + "Generation Parameter Output", + "Image Full BBox", + "Load BiseNet", + "Load RetinaFace", + "Mask Contour", + "Segment Face", + "Uncrop Face" + ], + { + "title_aux": "Character Face Swap" + } + ], + "https://github.com/ArtVentureX/comfyui-animatediff": [ + [ + "AnimateDiffCombine", + "AnimateDiffLoraLoader", + "AnimateDiffModuleLoader", + "AnimateDiffSampler", + "AnimateDiffSlidingWindowOptions", + "ImageSizeAndBatchSize", + "LoadVideo" + ], + { + "title_aux": "AnimateDiff" + } + ], + "https://github.com/AustinMroz/ComfyUI-SpliceTools": [ + [ + "LogSigmas", + "RerangeSigmas", + "SpliceDenoised", + "SpliceLatents", + "TemporalSplice" + ], + { + "title_aux": "SpliceTools" + } + ], + "https://github.com/BadCafeCode/masquerade-nodes-comfyui": [ + [ + "Blur", + "Change Channel Count", + "Combine Masks", + "Constant Mask", + "Convert Color Space", + "Create QR Code", + "Create Rect Mask", + "Cut By Mask", + "Get Image Size", + "Image To Mask", + "Make Image Batch", + "Mask By Text", + "Mask Morphology", + "Mask To Region", + "MasqueradeIncrementer", + "Mix Color By Mask", + "Mix Images By Mask", + "Paste By Mask", + "Prune By Mask", + "Separate Mask Components", + "Unary Image Op", + "Unary Mask Op" + ], + { + "title_aux": "Masquerade Nodes" + } + ], + "https://github.com/Beinsezii/bsz-cui-extras": [ + [ + "BSZAbsoluteHires", + "BSZAspectHires", + "BSZColoredLatentImageXL", + "BSZCombinedHires", + "BSZHueChromaXL", + "BSZInjectionKSampler", + "BSZLatentDebug", + "BSZLatentFill", + "BSZLatentGradient", + "BSZLatentHSVAImage", + "BSZLatentOffsetXL", + "BSZLatentRGBAImage", + "BSZLatentbuster", + "BSZPixelbuster", + "BSZPixelbusterHelp", + "BSZPrincipledConditioning", + "BSZPrincipledSampler", + "BSZPrincipledScale", + "BSZStrangeResample" + ], + { + "title_aux": "bsz-cui-extras" + } + ], + "https://github.com/BennyKok/comfyui-deploy": [ + [ + "ComfyUIDeployExternalCheckpoint", + "ComfyUIDeployExternalImage", + "ComfyUIDeployExternalImageAlpha", + "ComfyUIDeployExternalLora", + "ComfyUIDeployExternalNumber", + "ComfyUIDeployExternalNumberInt", + "ComfyUIDeployExternalText" + ], + { + "author": "BennyKok", + "description": "", + "nickname": "Comfy Deploy", + "title": "comfyui-deploy", + "title_aux": "ComfyUI Deploy" + } + ], + "https://github.com/Bikecicle/ComfyUI-Waveform-Extensions/raw/main/EXT_AudioManipulation.py": [ + [ + "BatchJoinAudio", + "CutAudio", + "DuplicateAudio", + "JoinAudio", + "ResampleAudio", + "ReverseAudio", + "StretchAudio" + ], + { + "title_aux": "Waveform Extensions" + } + ], + "https://github.com/Billius-AI/ComfyUI-Path-Helper": [ + [ + "Add File Name Prefix", + "Add File Name Prefix Advanced", + "Add Folder", + "Add Folder Advanced", + "Create Project Root", + "Join Variables", + "Show Path", + "Show String" + ], + { + "title_aux": "ComfyUI-Path-Helper" + } + ], + "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb": [ + [ + "BNK_AddCLIPSDXLParams", + "BNK_AddCLIPSDXLRParams", + "BNK_CLIPTextEncodeAdvanced", + "BNK_CLIPTextEncodeSDXLAdvanced" + ], + { + "title_aux": "Advanced CLIP Text Encode" + } + ], + "https://github.com/BlenderNeko/ComfyUI_Cutoff": [ + [ + "BNK_CutoffBasePrompt", + "BNK_CutoffRegionsToConditioning", + "BNK_CutoffRegionsToConditioning_ADV", + "BNK_CutoffSetRegions" + ], + { + "title_aux": "ComfyUI Cutoff" + } + ], + "https://github.com/BlenderNeko/ComfyUI_Noise": [ + [ + "BNK_DuplicateBatchIndex", + "BNK_GetSigma", + "BNK_InjectNoise", + "BNK_NoisyLatentImage", + "BNK_SlerpLatent", + "BNK_Unsampler" + ], + { + 
"title_aux": "ComfyUI Noise" + } + ], + "https://github.com/BlenderNeko/ComfyUI_SeeCoder": [ + [ + "ConcatConditioning", + "SEECoderImageEncode" + ], + { + "title_aux": "SeeCoder [WIP]" + } + ], + "https://github.com/BlenderNeko/ComfyUI_TiledKSampler": [ + [ + "BNK_TiledKSampler", + "BNK_TiledKSamplerAdvanced" + ], + { + "title_aux": "Tiled sampling for ComfyUI" + } + ], + "https://github.com/CYBERLOOM-INC/ComfyUI-nodes-hnmr": [ + [ + "CLIPIter", + "Dict2Model", + "GridImage", + "ImageBlend2", + "KSamplerOverrided", + "KSamplerSetting", + "KSamplerXYZ", + "LatentToHist", + "LatentToImage", + "ModelIter", + "RandomLatentImage", + "SaveStateDict", + "SaveText", + "StateDictLoader", + "StateDictMerger", + "StateDictMergerBlockWeighted", + "StateDictMergerBlockWeightedMulti", + "VAEDecodeBatched", + "VAEEncodeBatched", + "VAEIter" + ], + { + "title_aux": "ComfyUI-nodes-hnmr" + } + ], + "https://github.com/CaptainGrock/ComfyUIInvisibleWatermark/raw/main/Invisible%20Watermark.py": [ + [ + "Apply Invisible Watermark", + "Extract Watermark" + ], + { + "title_aux": "ComfyUIInvisibleWatermark" + } + ], + "https://github.com/Chan-0312/ComfyUI-IPAnimate": [ + [ + "IPAdapterAnimate" + ], + { + "title_aux": "ComfyUI-IPAnimate" + } + ], + "https://github.com/Chaoses-Ib/ComfyUI_Ib_CustomNodes": [ + [ + "ImageToPIL", + "LoadImageFromPath", + "PILToImage", + "PILToMask" + ], + { + "title_aux": "ComfyUI_Ib_CustomNodes" + } + ], + "https://github.com/Clybius/ComfyUI-Extra-Samplers": [ + [ + "SamplerCLYB_4M_SDE_Momentumized", + "SamplerCustomModelMixtureDuo", + "SamplerCustomNoise", + "SamplerCustomNoiseDuo", + "SamplerDPMPP_DualSDE_Momentumized", + "SamplerEulerAncestralDancing_Experimental", + "SamplerLCMCustom", + "SamplerRES_Momentumized", + "SamplerTTM" + ], + { + "title_aux": "ComfyUI Extra Samplers" + } + ], + "https://github.com/Clybius/ComfyUI-Latent-Modifiers": [ + [ + "Latent Diffusion Mega Modifier" + ], + { + "title_aux": "ComfyUI-Latent-Modifiers" + } + ], + "https://github.com/CosmicLaca/ComfyUI_Primere_Nodes": [ + [ + "PrimereAnyDetailer", + "PrimereAnyOutput", + "PrimereCKPT", + "PrimereCKPTLoader", + "PrimereCLIPEncoder", + "PrimereClearPrompt", + "PrimereDynamicParser", + "PrimereEmbedding", + "PrimereEmbeddingHandler", + "PrimereEmbeddingKeywordMerger", + "PrimereEmotionsStyles", + "PrimereHypernetwork", + "PrimereImageSegments", + "PrimereKSampler", + "PrimereLCMSelector", + "PrimereLORA", + "PrimereLYCORIS", + "PrimereLatentNoise", + "PrimereLoraKeywordMerger", + "PrimereLoraStackMerger", + "PrimereLycorisKeywordMerger", + "PrimereLycorisStackMerger", + "PrimereMetaCollector", + "PrimereMetaRead", + "PrimereMetaSave", + "PrimereMidjourneyStyles", + "PrimereModelConceptSelector", + "PrimereModelKeyword", + "PrimereNetworkTagLoader", + "PrimerePrompt", + "PrimerePromptOrganizer", + "PrimerePromptSwitch", + "PrimereRefinerPrompt", + "PrimereResolution", + "PrimereResolutionMultiplier", + "PrimereResolutionMultiplierMPX", + "PrimereSamplers", + "PrimereSamplersSteps", + "PrimereSeed", + "PrimereStepsCfg", + "PrimereStyleLoader", + "PrimereStylePile", + "PrimereTextOutput", + "PrimereVAE", + "PrimereVAELoader", + "PrimereVAESelector", + "PrimereVisualCKPT", + "PrimereVisualEmbedding", + "PrimereVisualHypernetwork", + "PrimereVisualLORA", + "PrimereVisualLYCORIS", + "PrimereVisualStyle" + ], + { + "title_aux": "Primere nodes for ComfyUI" + } + ], + "https://github.com/Danand/ComfyUI-ComfyCouple": [ + [ + "Attention couple", + "Comfy Couple" + ], + { + "author": "Rei D.", + "description": "If 
you want to draw two different characters together without blending their features, so you could try to check out this custom node.", + "nickname": "Danand", + "title": "Comfy Couple", + "title_aux": "ComfyUI-ComfyCouple" + } + ], + "https://github.com/Davemane42/ComfyUI_Dave_CustomNode": [ + [ + "ABGRemover", + "ConditioningStretch", + "ConditioningUpscale", + "MultiAreaConditioning", + "MultiLatentComposite" + ], + { + "title_aux": "Visual Area Conditioning / Latent composition" + } + ], + "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes": [ + [ + "ABSNode_DF", + "Absolute value", + "Ceil", + "CeilNode_DF", + "Conditioning area scale by ratio", + "ConditioningSetArea with tuples", + "ConditioningSetAreaEXT_DF", + "ConditioningSetArea_DF", + "CosNode_DF", + "Cosines", + "Divide", + "DivideNode_DF", + "EmptyLatentImage_DF", + "Float", + "Float debug print", + "Float2Tuple_DF", + "FloatDebugPrint_DF", + "FloatNode_DF", + "Floor", + "FloorNode_DF", + "Get image size", + "Get latent size", + "GetImageSize_DF", + "GetLatentSize_DF", + "Image scale by ratio", + "Image scale to side", + "ImageScale_Ratio_DF", + "ImageScale_Side_DF", + "Int debug print", + "Int to float", + "Int to tuple", + "Int2Float_DF", + "IntDebugPrint_DF", + "Integer", + "IntegerNode_DF", + "Latent Scale by ratio", + "Latent Scale to side", + "LatentComposite with tuples", + "LatentScale_Ratio_DF", + "LatentScale_Side_DF", + "MultilineStringNode_DF", + "Multiply", + "MultiplyNode_DF", + "PowNode_DF", + "Power", + "Random", + "RandomFloat_DF", + "SinNode_DF", + "Sinus", + "SqrtNode_DF", + "Square root", + "String debug print", + "StringNode_DF", + "Subtract", + "SubtractNode_DF", + "Sum", + "SumNode_DF", + "TanNode_DF", + "Tangent", + "Text", + "Text box", + "Tuple", + "Tuple debug print", + "Tuple multiply", + "Tuple swap", + "Tuple to floats", + "Tuple to ints", + "Tuple2Float_DF", + "TupleDebugPrint_DF", + "TupleNode_DF" + ], + { + "title_aux": "Derfuu_ComfyUI_ModdedNodes" + } + ], + "https://github.com/DonBaronFactory/ComfyUI-Cre8it-Nodes": [ + [ + "ApplySerialPrompter", + "ImageSizer", + "SerialPrompter" + ], + { + "author": "CRE8IT GmbH", + "description": "This extension offers various nodes.", + "nickname": "cre8Nodes", + "title": "cr8SerialPrompter", + "title_aux": "ComfyUI-Cre8it-Nodes" + } + ], + "https://github.com/Electrofried/ComfyUI-OpenAINode": [ + [ + "OpenAINode" + ], + { + "title_aux": "OpenAINode" + } + ], + "https://github.com/EllangoK/ComfyUI-post-processing-nodes": [ + [ + "ArithmeticBlend", + "AsciiArt", + "Blend", + "Blur", + "CannyEdgeMask", + "ChromaticAberration", + "ColorCorrect", + "ColorTint", + "Dissolve", + "Dither", + "DodgeAndBurn", + "FilmGrain", + "Glow", + "HSVThresholdMask", + "KMeansQuantize", + "KuwaharaBlur", + "Parabolize", + "PencilSketch", + "PixelSort", + "Pixelize", + "Quantize", + "Sharpen", + "SineWave", + "Solarize", + "Vignette" + ], + { + "title_aux": "ComfyUI-post-processing-nodes" + } + ], + "https://github.com/Extraltodeus/ComfyUI-AutomaticCFG": [ + [ + "Automatic CFG", + "Automatic CFG channels multipliers" + ], + { + "title_aux": "ComfyUI-AutomaticCFG" + } + ], + "https://github.com/Extraltodeus/LoadLoraWithTags": [ + [ + "LoraLoaderTagsQuery" + ], + { + "title_aux": "LoadLoraWithTags" + } + ], + "https://github.com/Extraltodeus/noise_latent_perlinpinpin": [ + [ + "NoisyLatentPerlin" + ], + { + "title_aux": "noise latent perlinpinpin" + } + ], + "https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler": [ + [ + "Get sigmas as float", + "Graph sigmas", 
+ "Manual scheduler", + "Merge sigmas by average", + "Merge sigmas gradually", + "Multiply sigmas", + "Split and concatenate sigmas", + "The Golden Scheduler" + ], + { + "title_aux": "sigmas_tools_and_the_golden_scheduler" + } + ], + "https://github.com/Fannovel16/ComfyUI-Frame-Interpolation": [ + [ + "AMT VFI", + "CAIN VFI", + "EISAI VFI", + "FILM VFI", + "FLAVR VFI", + "GMFSS Fortuna VFI", + "IFRNet VFI", + "IFUnet VFI", + "KSampler Gradually Adding More Denoise (efficient)", + "M2M VFI", + "Make Interpolation State List", + "RIFE VFI", + "STMFNet VFI", + "Sepconv VFI" + ], + { + "title_aux": "ComfyUI Frame Interpolation" + } + ], + "https://github.com/Fannovel16/ComfyUI-Loopchain": [ + [ + "EmptyLatentImageLoop", + "FolderToImageStorage", + "ImageStorageExportLoop", + "ImageStorageImport", + "ImageStorageReset", + "LatentStorageExportLoop", + "LatentStorageImport", + "LatentStorageReset" + ], + { + "title_aux": "ComfyUI Loopchain" + } + ], + "https://github.com/Fannovel16/ComfyUI-MotionDiff": [ + [ + "EmptyMotionData", + "ExportSMPLTo3DSoftware", + "MotionCLIPTextEncode", + "MotionDataVisualizer", + "MotionDiffLoader", + "MotionDiffSimpleSampler", + "RenderSMPLMesh", + "SMPLLoader", + "SaveSMPL", + "SmplifyMotionData" + ], + { + "title_aux": "ComfyUI MotionDiff" + } + ], + "https://github.com/Fannovel16/ComfyUI-Video-Matting": [ + [ + "BRIAAI Matting", + "Robust Video Matting" + ], + { + "title_aux": "ComfyUI-Video-Matting" + } + ], + "https://github.com/Fannovel16/comfyui_controlnet_aux": [ + [ + "AIO_Preprocessor", + "AnimalPosePreprocessor", + "AnimeFace_SemSegPreprocessor", + "AnimeLineArtPreprocessor", + "BAE-NormalMapPreprocessor", + "BinaryPreprocessor", + "CannyEdgePreprocessor", + "ColorPreprocessor", + "DWPreprocessor", + "DensePosePreprocessor", + "DepthAnythingPreprocessor", + "DiffusionEdge_Preprocessor", + "FacialPartColoringFromPoseKps", + "FakeScribblePreprocessor", + "HEDPreprocessor", + "HintImageEnchance", + "ImageGenResolutionFromImage", + "ImageGenResolutionFromLatent", + "ImageIntensityDetector", + "ImageLuminanceDetector", + "InpaintPreprocessor", + "LeReS-DepthMapPreprocessor", + "LineArtPreprocessor", + "LineartStandardPreprocessor", + "M-LSDPreprocessor", + "Manga2Anime_LineArt_Preprocessor", + "MaskOptFlow", + "MediaPipe-FaceMeshPreprocessor", + "MeshGraphormer-DepthMapPreprocessor", + "MiDaS-DepthMapPreprocessor", + "MiDaS-NormalMapPreprocessor", + "OneFormer-ADE20K-SemSegPreprocessor", + "OneFormer-COCO-SemSegPreprocessor", + "OpenposePreprocessor", + "PiDiNetPreprocessor", + "PixelPerfectResolution", + "SAMPreprocessor", + "SavePoseKpsAsJsonFile", + "ScribblePreprocessor", + "Scribble_XDoG_Preprocessor", + "SemSegPreprocessor", + "ShufflePreprocessor", + "TEEDPreprocessor", + "TilePreprocessor", + "UniFormer-SemSegPreprocessor", + "Unimatch_OptFlowPreprocessor", + "Zoe-DepthMapPreprocessor", + "Zoe_DepthAnythingPreprocessor" + ], + { + "author": "tstandley", + "title_aux": "ComfyUI's ControlNet Auxiliary Preprocessors" + } + ], + "https://github.com/Feidorian/feidorian-ComfyNodes": [ + [], + { + "nodename_pattern": "^Feidorian_", + "title_aux": "feidorian-ComfyNodes" + } + ], + "https://github.com/Fictiverse/ComfyUI_Fictiverse": [ + [ + "Add Noise to Image with Mask", + "Color correction", + "Displace Image with Depth", + "Displace Images with Mask", + "Zoom Image with Depth" + ], + { + "title_aux": "ComfyUI Fictiverse Nodes" + } + ], + "https://github.com/FizzleDorf/ComfyUI-AIT": [ + [ + "AIT_Unet_Loader", + "AIT_VAE_Encode_Loader" + ], + { + "title_aux": 
"ComfyUI-AIT" + } + ], + "https://github.com/FizzleDorf/ComfyUI_FizzNodes": [ + [ + "AbsCosWave", + "AbsSinWave", + "BatchGLIGENSchedule", + "BatchPromptSchedule", + "BatchPromptScheduleEncodeSDXL", + "BatchPromptScheduleLatentInput", + "BatchPromptScheduleNodeFlowEnd", + "BatchPromptScheduleSDXLLatentInput", + "BatchStringSchedule", + "BatchValueSchedule", + "BatchValueScheduleLatentInput", + "CalculateFrameOffset", + "ConcatStringSingle", + "CosWave", + "FizzFrame", + "FizzFrameConcatenate", + "ImageBatchFromValueSchedule", + "Init FizzFrame", + "InvCosWave", + "InvSinWave", + "Lerp", + "PromptSchedule", + "PromptScheduleEncodeSDXL", + "PromptScheduleNodeFlow", + "PromptScheduleNodeFlowEnd", + "SawtoothWave", + "SinWave", + "SquareWave", + "StringConcatenate", + "StringSchedule", + "TriangleWave", + "ValueSchedule", + "convertKeyframeKeysToBatchKeys" + ], + { + "title_aux": "FizzNodes" + } + ], + "https://github.com/FlyingFireCo/tiled_ksampler": [ + [ + "Asymmetric Tiled KSampler", + "Circular VAEDecode", + "Tiled KSampler" + ], + { + "title_aux": "tiled_ksampler" + } + ], + "https://github.com/Franck-Demongin/NX_PromptStyler": [ + [ + "NX_PromptStyler" + ], + { + "title_aux": "NX_PromptStyler" + } + ], + "https://github.com/GMapeSplat/ComfyUI_ezXY": [ + [ + "ConcatenateString", + "ItemFromDropdown", + "IterationDriver", + "JoinImages", + "LineToConsole", + "NumberFromList", + "NumbersToList", + "PlotImages", + "StringFromList", + "StringToLabel", + "StringsToList", + "ezMath", + "ezXY_AssemblePlot", + "ezXY_Driver" + ], + { + "title_aux": "ezXY scripts and nodes" + } + ], + "https://github.com/GTSuya-Studio/ComfyUI-Gtsuya-Nodes": [ + [ + "Danbooru (ID)", + "Danbooru (Random)", + "Random File From Path", + "Replace Strings", + "Simple Wildcards", + "Simple Wildcards (Dir.)", + "Wildcards Nodes" + ], + { + "title_aux": "ComfyUI-GTSuya-Nodes" + } + ], + "https://github.com/GavChap/ComfyUI-CascadeResolutions": [ + [ + "CascadeResolutions" + ], + { + "title_aux": "ComfyUI-CascadeResolutions" + } + ], + "https://github.com/Gourieff/comfyui-reactor-node": [ + [ + "ReActorFaceSwap", + "ReActorLoadFaceModel", + "ReActorRestoreFace", + "ReActorSaveFaceModel" + ], + { + "title_aux": "ReActor Node for ComfyUI" + } + ], + "https://github.com/HAL41/ComfyUI-aichemy-nodes": [ + [ + "aichemyYOLOv8Segmentation" + ], + { + "title_aux": "ComfyUI aichemy nodes" + } + ], + "https://github.com/Hangover3832/ComfyUI-Hangover-Moondream": [ + [ + "Moondream Interrogator (NO COMMERCIAL USE)" + ], + { + "title_aux": "ComfyUI-Hangover-Moondream" + } + ], + "https://github.com/Hangover3832/ComfyUI-Hangover-Nodes": [ + [ + "Image Scale Bounding Box", + "MS kosmos-2 Interrogator", + "Make Inpaint Model", + "Save Image w/o Metadata" + ], + { + "title_aux": "ComfyUI-Hangover-Nodes" + } + ], + "https://github.com/Haoming02/comfyui-diffusion-cg": [ + [ + "Normalization", + "NormalizationXL", + "Recenter", + "Recenter XL" + ], + { + "title_aux": "ComfyUI Diffusion Color Grading" + } + ], + "https://github.com/Haoming02/comfyui-floodgate": [ + [ + "FloodGate" + ], + { + "title_aux": "ComfyUI Floodgate" + } + ], + "https://github.com/HaydenReeve/ComfyUI-Better-Strings": [ + [ + "BetterString" + ], + { + "title_aux": "ComfyUI Better Strings" + } + ], + "https://github.com/HebelHuber/comfyui-enhanced-save-node": [ + [ + "EnhancedSaveNode" + ], + { + "title_aux": "comfyui-enhanced-save-node" + } + ], + "https://github.com/Hiero207/ComfyUI-Hiero-Nodes": [ + [ + "Post to Discord w/ Webhook" + ], + { + "author": "Hiero", + 
"description": "Just some nodes that I wanted/needed, so I made them.", + "nickname": "HNodes", + "title": "Hiero-Nodes", + "title_aux": "ComfyUI-Hiero-Nodes" + } + ], + "https://github.com/IDGallagher/ComfyUI-IG-Nodes": [ + [ + "IG Analyze SSIM", + "IG Cross Fade Images", + "IG Explorer", + "IG Float", + "IG Folder", + "IG Int", + "IG Load Image", + "IG Load Images", + "IG Multiply", + "IG Path Join", + "IG String", + "IG ZFill" + ], + { + "author": "IDGallagher", + "description": "Custom nodes to aid in the exploration of Latent Space", + "nickname": "IG Interpolation Nodes", + "title": "IG Interpolation Nodes", + "title_aux": "IG Interpolation Nodes" + } + ], + "https://github.com/Inzaniak/comfyui-ranbooru": [ + [ + "PromptBackground", + "PromptLimit", + "PromptMix", + "PromptRandomWeight", + "PromptRemove", + "Ranbooru", + "RanbooruURL", + "RandomPicturePath" + ], + { + "title_aux": "Ranbooru for ComfyUI" + } + ], + "https://github.com/JPS-GER/ComfyUI_JPS-Nodes": [ + [ + "Conditioning Switch (JPS)", + "ControlNet Switch (JPS)", + "Crop Image Pipe (JPS)", + "Crop Image Settings (JPS)", + "Crop Image Square (JPS)", + "Crop Image TargetSize (JPS)", + "CtrlNet CannyEdge Pipe (JPS)", + "CtrlNet CannyEdge Settings (JPS)", + "CtrlNet MiDaS Pipe (JPS)", + "CtrlNet MiDaS Settings (JPS)", + "CtrlNet OpenPose Pipe (JPS)", + "CtrlNet OpenPose Settings (JPS)", + "CtrlNet ZoeDepth Pipe (JPS)", + "CtrlNet ZoeDepth Settings (JPS)", + "Disable Enable Switch (JPS)", + "Enable Disable Switch (JPS)", + "Generation TXT IMG Settings (JPS)", + "Get Date Time String (JPS)", + "Get Image Size (JPS)", + "IP Adapter Settings (JPS)", + "IP Adapter Settings Pipe (JPS)", + "IP Adapter Single Settings (JPS)", + "IP Adapter Single Settings Pipe (JPS)", + "IPA Switch (JPS)", + "Image Switch (JPS)", + "ImageToImage Pipe (JPS)", + "ImageToImage Settings (JPS)", + "Images Masks MultiPipe (JPS)", + "Integer Switch (JPS)", + "Largest Int (JPS)", + "Latent Switch (JPS)", + "Lora Loader (JPS)", + "Mask Switch (JPS)", + "Model Switch (JPS)", + "Multiply Float Float (JPS)", + "Multiply Int Float (JPS)", + "Multiply Int Int (JPS)", + "Resolution Multiply (JPS)", + "Revision Settings (JPS)", + "Revision Settings Pipe (JPS)", + "SDXL Basic Settings (JPS)", + "SDXL Basic Settings Pipe (JPS)", + "SDXL Fundamentals MultiPipe (JPS)", + "SDXL Prompt Handling (JPS)", + "SDXL Prompt Handling Plus (JPS)", + "SDXL Prompt Styler (JPS)", + "SDXL Recommended Resolution Calc (JPS)", + "SDXL Resolutions (JPS)", + "Sampler Scheduler Settings (JPS)", + "Save Images Plus (JPS)", + "Substract Int Int (JPS)", + "Text Concatenate (JPS)", + "Text Prompt (JPS)", + "VAE Switch (JPS)" + ], + { + "author": "JPS", + "description": "Various nodes to handle SDXL Resolutions, SDXL Basic Settings, IP Adapter Settings, Revision Settings, SDXL Prompt Styler, Crop Image to Square, Crop Image to Target Size, Get Date-Time String, Resolution Multiply, Largest Integer, 5-to-1 Switches for Integer, Images, Latents, Conditioning, Model, VAE, ControlNet", + "nickname": "JPS Custom Nodes", + "title": "JPS Custom Nodes for ComfyUI", + "title_aux": "JPS Custom Nodes for ComfyUI" + } + ], + "https://github.com/JaredTherriault/ComfyUI-JNodes": [ + [ + "JNodes_AddOrSetMetaDataKey", + "JNodes_AnyToString", + "JNodes_AppendReversedFrames", + "JNodes_BooleanSelectorWithString", + "JNodes_CheckpointSelectorWithString", + "JNodes_GetOutputDirectory", + "JNodes_GetParameterFromList", + "JNodes_GetParameterGlobal", + "JNodes_GetTempDirectory", + "JNodes_ImageFormatSelector", + 
"JNodes_ImageSizeSelector", + "JNodes_LoadVideo", + "JNodes_LoraExtractor", + "JNodes_OutVideoInfo", + "JNodes_ParseDynamicPrompts", + "JNodes_ParseParametersToGlobalList", + "JNodes_ParseWildcards", + "JNodes_PromptBuilderSingleSubject", + "JNodes_RemoveCommentedText", + "JNodes_RemoveMetaDataKey", + "JNodes_RemoveParseableDataForInference", + "JNodes_SamplerSelectorWithString", + "JNodes_SaveImageWithOutput", + "JNodes_SaveVideo", + "JNodes_SchedulerSelectorWithString", + "JNodes_SearchAndReplace", + "JNodes_SearchAndReplaceFromFile", + "JNodes_SearchAndReplaceFromList", + "JNodes_SetNegativePromptInMetaData", + "JNodes_SetPositivePromptInMetaData", + "JNodes_SplitAndJoin", + "JNodes_StringLiteral", + "JNodes_SyncedStringLiteral", + "JNodes_TokenCounter", + "JNodes_TrimAndStrip", + "JNodes_UploadVideo", + "JNodes_VaeSelectorWithString" + ], + { + "title_aux": "ComfyUI-JNodes" + } + ], + "https://github.com/JcandZero/ComfyUI_GLM4Node": [ + [ + "GLM3_turbo_CHAT", + "GLM4_CHAT", + "GLM4_Vsion_IMGURL" + ], + { + "title_aux": "ComfyUI_GLM4Node" + } + ], + "https://github.com/Jcd1230/rembg-comfyui-node": [ + [ + "Image Remove Background (rembg)" + ], + { + "title_aux": "Rembg Background Removal Node for ComfyUI" + } + ], + "https://github.com/JerryOrbachJr/ComfyUI-RandomSize": [ + [ + "JOJR_RandomSize" + ], + { + "author": "JerryOrbachJr", + "description": "A ComfyUI custom node that randomly selects a height and width pair from a list in a config file", + "nickname": "Random Size", + "title": "Random Size", + "title_aux": "ComfyUI-RandomSize" + } + ], + "https://github.com/Jordach/comfy-plasma": [ + [ + "JDC_AutoContrast", + "JDC_BlendImages", + "JDC_BrownNoise", + "JDC_Contrast", + "JDC_EqualizeGrey", + "JDC_GaussianBlur", + "JDC_GreyNoise", + "JDC_Greyscale", + "JDC_ImageLoader", + "JDC_ImageLoaderMeta", + "JDC_PinkNoise", + "JDC_Plasma", + "JDC_PlasmaSampler", + "JDC_PowerImage", + "JDC_RandNoise", + "JDC_ResizeFactor" + ], + { + "title_aux": "comfy-plasma" + } + ], + "https://github.com/Kaharos94/ComfyUI-Saveaswebp": [ + [ + "Save_as_webp" + ], + { + "title_aux": "ComfyUI-Saveaswebp" + } + ], + "https://github.com/Kangkang625/ComfyUI-paint-by-example": [ + [ + "PaintbyExamplePipeLoader", + "PaintbyExampleSampler" + ], + { + "title_aux": "ComfyUI-Paint-by-Example" + } + ], + "https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet": [ + [ + "ACN_AdvancedControlNetApply", + "ACN_ControlNetLoaderWithLoraAdvanced", + "ACN_DefaultUniversalWeights", + "ACN_SparseCtrlIndexMethodNode", + "ACN_SparseCtrlLoaderAdvanced", + "ACN_SparseCtrlMergedLoaderAdvanced", + "ACN_SparseCtrlRGBPreprocessor", + "ACN_SparseCtrlSpreadMethodNode", + "ControlNetLoaderAdvanced", + "CustomControlNetWeights", + "CustomT2IAdapterWeights", + "DiffControlNetLoaderAdvanced", + "LatentKeyframe", + "LatentKeyframeBatchedGroup", + "LatentKeyframeGroup", + "LatentKeyframeTiming", + "LoadImagesFromDirectory", + "ScaledSoftControlNetWeights", + "ScaledSoftMaskedUniversalWeights", + "SoftControlNetWeights", + "SoftT2IAdapterWeights", + "TimestepKeyframe" + ], + { + "title_aux": "ComfyUI-Advanced-ControlNet" + } + ], + "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved": [ + [ + "ADE_AdjustPEFullStretch", + "ADE_AdjustPEManual", + "ADE_AdjustPESweetspotStretch", + "ADE_AnimateDiffCombine", + "ADE_AnimateDiffKeyframe", + "ADE_AnimateDiffLoRALoader", + "ADE_AnimateDiffLoaderGen1", + "ADE_AnimateDiffLoaderV1Advanced", + "ADE_AnimateDiffLoaderWithContext", + "ADE_AnimateDiffModelSettings", + 
"ADE_AnimateDiffModelSettingsAdvancedAttnStrengths", + "ADE_AnimateDiffModelSettingsSimple", + "ADE_AnimateDiffModelSettings_Release", + "ADE_AnimateDiffSamplingSettings", + "ADE_AnimateDiffSettings", + "ADE_AnimateDiffUniformContextOptions", + "ADE_AnimateDiffUnload", + "ADE_ApplyAnimateDiffModel", + "ADE_ApplyAnimateDiffModelSimple", + "ADE_BatchedContextOptions", + "ADE_CustomCFG", + "ADE_CustomCFGKeyframe", + "ADE_EmptyLatentImageLarge", + "ADE_IterationOptsDefault", + "ADE_IterationOptsFreeInit", + "ADE_LoadAnimateDiffModel", + "ADE_LoopedUniformContextOptions", + "ADE_LoopedUniformViewOptions", + "ADE_MaskedLoadLora", + "ADE_MultivalDynamic", + "ADE_MultivalScaledMask", + "ADE_NoiseLayerAdd", + "ADE_NoiseLayerAddWeighted", + "ADE_NoiseLayerReplace", + "ADE_RawSigmaSchedule", + "ADE_SigmaSchedule", + "ADE_SigmaScheduleSplitAndCombine", + "ADE_SigmaScheduleWeightedAverage", + "ADE_SigmaScheduleWeightedAverageInterp", + "ADE_StandardStaticContextOptions", + "ADE_StandardStaticViewOptions", + "ADE_StandardUniformContextOptions", + "ADE_StandardUniformViewOptions", + "ADE_UseEvolvedSampling", + "ADE_ViewsOnlyContextOptions", + "AnimateDiffLoaderV1", + "CheckpointLoaderSimpleWithNoiseSelect" + ], + { + "title_aux": "AnimateDiff Evolved" + } + ], + "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite": [ + [ + "VHS_BatchManager", + "VHS_DuplicateImages", + "VHS_DuplicateLatents", + "VHS_DuplicateMasks", + "VHS_GetImageCount", + "VHS_GetLatentCount", + "VHS_GetMaskCount", + "VHS_LoadAudio", + "VHS_LoadImages", + "VHS_LoadImagesPath", + "VHS_LoadVideo", + "VHS_LoadVideoPath", + "VHS_MergeImages", + "VHS_MergeLatents", + "VHS_MergeMasks", + "VHS_PruneOutputs", + "VHS_SelectEveryNthImage", + "VHS_SelectEveryNthLatent", + "VHS_SelectEveryNthMask", + "VHS_SplitImages", + "VHS_SplitLatents", + "VHS_SplitMasks", + "VHS_VAEDecodeBatched", + "VHS_VAEEncodeBatched", + "VHS_VideoCombine" + ], + { + "title_aux": "ComfyUI-VideoHelperSuite" + } + ], + "https://github.com/LEv145/images-grid-comfy-plugin": [ + [ + "GridAnnotation", + "ImageCombine", + "ImagesGridByColumns", + "ImagesGridByRows", + "LatentCombine" + ], + { + "title_aux": "ImagesGrid" + } + ], + "https://github.com/LarryJane491/Image-Captioning-in-ComfyUI": [ + [ + "LoRA Caption Load", + "LoRA Caption Save" + ], + { + "title_aux": "Image-Captioning-in-ComfyUI" + } + ], + "https://github.com/LarryJane491/Lora-Training-in-Comfy": [ + [ + "Lora Training in Comfy (Advanced)", + "Lora Training in ComfyUI", + "Tensorboard Access" + ], + { + "title_aux": "Lora-Training-in-Comfy" + } + ], + "https://github.com/Layer-norm/comfyui-lama-remover": [ + [ + "LamaRemover", + "LamaRemoverIMG" + ], + { + "title_aux": "Comfyui lama remover" + } + ], + "https://github.com/Lerc/canvas_tab": [ + [ + "Canvas_Tab", + "Send_To_Editor" + ], + { + "author": "Lerc", + "description": "This extension provides a full page image editor with mask support. 
There are two nodes, one to receive images from the editor and one to send images to the editor.", + "nickname": "Canvas Tab", + "title": "Canvas Tab", + "title_aux": "Canvas Tab" + } + ], + "https://github.com/Limitex/ComfyUI-Calculation": [ + [ + "CenterCalculation", + "CreateQRCode" + ], + { + "title_aux": "ComfyUI-Calculation" + } + ], + "https://github.com/Limitex/ComfyUI-Diffusers": [ + [ + "CreateIntListNode", + "DiffusersClipTextEncode", + "DiffusersModelMakeup", + "DiffusersPipelineLoader", + "DiffusersSampler", + "DiffusersSchedulerLoader", + "DiffusersVaeLoader", + "LcmLoraLoader", + "StreamDiffusionCreateStream", + "StreamDiffusionFastSampler", + "StreamDiffusionSampler", + "StreamDiffusionWarmup" + ], + { + "title_aux": "ComfyUI-Diffusers" + } + ], + "https://github.com/Loewen-Hob/rembg-comfyui-node-better": [ + [ + "Image Remove Background (rembg)" + ], + { + "title_aux": "Rembg Background Removal Node for ComfyUI" + } + ], + "https://github.com/LonicaMewinsky/ComfyUI-MakeFrame": [ + [ + "BreakFrames", + "BreakGrid", + "GetKeyFrames", + "MakeGrid", + "RandomImageFromDir" + ], + { + "title_aux": "ComfyBreakAnim" + } + ], + "https://github.com/LonicaMewinsky/ComfyUI-RawSaver": [ + [ + "SaveTifImage" + ], + { + "title_aux": "ComfyUI-RawSaver" + } + ], + "https://github.com/LyazS/comfyui-anime-seg": [ + [ + "Anime Character Seg" + ], + { + "title_aux": "Anime Character Segmentation node for comfyui" + } + ], + "https://github.com/M1kep/ComfyLiterals": [ + [ + "Checkpoint", + "Float", + "Int", + "KepStringLiteral", + "Lora", + "Operation", + "String" + ], + { + "title_aux": "ComfyLiterals" + } + ], + "https://github.com/M1kep/ComfyUI-KepOpenAI": [ + [ + "KepOpenAI_ImageWithPrompt" + ], + { + "title_aux": "ComfyUI-KepOpenAI" + } + ], + "https://github.com/M1kep/ComfyUI-OtherVAEs": [ + [ + "OtherVAE_Taesd" + ], + { + "title_aux": "ComfyUI-OtherVAEs" + } + ], + "https://github.com/M1kep/Comfy_KepKitchenSink": [ + [ + "KepRotateImage" + ], + { + "title_aux": "Comfy_KepKitchenSink" + } + ], + "https://github.com/M1kep/Comfy_KepListStuff": [ + [ + "Empty Images", + "Image Overlay", + "ImageListLoader", + "Join Float Lists", + "Join Image Lists", + "KepStringList", + "KepStringListFromNewline", + "Kep_JoinListAny", + "Kep_RepeatList", + "Kep_ReverseList", + "Kep_VariableImageBuilder", + "List Length", + "Range(Num Steps) - Float", + "Range(Num Steps) - Int", + "Range(Step) - Float", + "Range(Step) - Int", + "Stack Images", + "XYAny", + "XYImage" + ], + { + "title_aux": "Comfy_KepListStuff" + } + ], + "https://github.com/M1kep/Comfy_KepMatteAnything": [ + [ + "MatteAnything_DinoBoxes", + "MatteAnything_GenerateVITMatte", + "MatteAnything_InitSamPredictor", + "MatteAnything_LoadDINO", + "MatteAnything_LoadVITMatteModel", + "MatteAnything_SAMLoader", + "MatteAnything_SAMMaskFromBoxes", + "MatteAnything_ToTrimap" + ], + { + "title_aux": "Comfy_KepMatteAnything" + } + ], + "https://github.com/M1kep/KepPromptLang": [ + [ + "Build Gif", + "Special CLIP Loader" + ], + { + "title_aux": "KepPromptLang" + } + ], + "https://github.com/MNeMoNiCuZ/ComfyUI-mnemic-nodes": [ + [ + "Save Text File_mne" + ], + { + "title_aux": "ComfyUI-mnemic-nodes" + } + ], + "https://github.com/Mamaaaamooooo/batchImg-rembg-ComfyUI-nodes": [ + [ + "Image Remove Background (rembg)" + ], + { + "title_aux": "Batch Rembg for ComfyUI" + } + ], + "https://github.com/ManglerFTW/ComfyI2I": [ + [ + "Color Transfer", + "Combine and Paste", + "Inpaint Segments", + "Mask Ops" + ], + { + "author": "ManglerFTW", + "title": "ComfyI2I", 
+ "title_aux": "ComfyI2I" + } + ], + "https://github.com/MarkoCa1/ComfyUI_Segment_Mask": [ + [ + "AutomaticMask(segment anything)" + ], + { + "title_aux": "ComfyUI_Segment_Mask" + } + ], + "https://github.com/Miosp/ComfyUI-FBCNN": [ + [ + "JPEG artifacts removal FBCNN" + ], + { + "title_aux": "ComfyUI-FBCNN" + } + ], + "https://github.com/MitoshiroPJ/comfyui_slothful_attention": [ + [ + "NearSightedAttention", + "NearSightedAttentionSimple", + "NearSightedTile", + "SlothfulAttention" + ], + { + "title_aux": "ComfyUI Slothful Attention" + } + ], + "https://github.com/MrForExample/ComfyUI-3D-Pack": [ + [], + { + "nodename_pattern": "^\\[Comfy3D\\]", + "title_aux": "ComfyUI-3D-Pack" + } + ], + "https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved": [ + [], + { + "nodename_pattern": "^\\[AnimateAnyone\\]", + "title_aux": "ComfyUI-AnimateAnyone-Evolved" + } + ], + "https://github.com/NicholasMcCarthy/ComfyUI_TravelSuite": [ + [ + "LatentTravel" + ], + { + "title_aux": "ComfyUI_TravelSuite" + } + ], + "https://github.com/NimaNzrii/comfyui-photoshop": [ + [ + "PhotoshopToComfyUI" + ], + { + "title_aux": "comfyui-photoshop" + } + ], + "https://github.com/NimaNzrii/comfyui-popup_preview": [ + [ + "PreviewPopup" + ], + { + "title_aux": "comfyui-popup_preview" + } + ], + "https://github.com/Niutonian/ComfyUi-NoodleWebcam": [ + [ + "WebcamNode" + ], + { + "title_aux": "ComfyUi-NoodleWebcam" + } + ], + "https://github.com/Nlar/ComfyUI_CartoonSegmentation": [ + [ + "AnimeSegmentation", + "KenBurnsConfigLoader", + "KenBurns_Processor", + "LoadImageFilename" + ], + { + "author": "Nels Larsen", + "description": "This extension offers a front end to the Cartoon Segmentation Project (https://github.com/CartoonSegmentation/CartoonSegmentation)", + "nickname": "CfyCS", + "title": "ComfyUI_CartoonSegmentation", + "title_aux": "ComfyUI_CartoonSegmentation" + } + ], + "https://github.com/NotHarroweD/Harronode": [ + [ + "Harronode" + ], + { + "author": "HarroweD and quadmoon (https://github.com/traugdor)", + "description": "This extension to ComfyUI will build a prompt for the Harrlogos LoRA for SDXL.", + "nickname": "Harronode", + "nodename_pattern": "Harronode", + "title": "Harrlogos Prompt Builder Node", + "title_aux": "Harronode" + } + ], + "https://github.com/Nourepide/ComfyUI-Allor": [ + [ + "AlphaChanelAdd", + "AlphaChanelAddByMask", + "AlphaChanelAsMask", + "AlphaChanelRemove", + "AlphaChanelRestore", + "ClipClamp", + "ClipVisionClamp", + "ClipVisionOutputClamp", + "ConditioningClamp", + "ControlNetClamp", + "GligenClamp", + "ImageBatchCopy", + "ImageBatchFork", + "ImageBatchGet", + "ImageBatchJoin", + "ImageBatchPermute", + "ImageBatchRemove", + "ImageClamp", + "ImageCompositeAbsolute", + "ImageCompositeAbsoluteByContainer", + "ImageCompositeRelative", + "ImageCompositeRelativeByContainer", + "ImageContainer", + "ImageContainerInheritanceAdd", + "ImageContainerInheritanceMax", + "ImageContainerInheritanceScale", + "ImageContainerInheritanceSum", + "ImageDrawArc", + "ImageDrawArcByContainer", + "ImageDrawChord", + "ImageDrawChordByContainer", + "ImageDrawEllipse", + "ImageDrawEllipseByContainer", + "ImageDrawLine", + "ImageDrawLineByContainer", + "ImageDrawPieslice", + "ImageDrawPiesliceByContainer", + "ImageDrawPolygon", + "ImageDrawRectangle", + "ImageDrawRectangleByContainer", + "ImageDrawRectangleRounded", + "ImageDrawRectangleRoundedByContainer", + "ImageEffectsAdjustment", + "ImageEffectsGrayscale", + "ImageEffectsLensBokeh", + "ImageEffectsLensChromaticAberration", + 
"ImageEffectsLensOpticAxis", + "ImageEffectsLensVignette", + "ImageEffectsLensZoomBurst", + "ImageEffectsNegative", + "ImageEffectsSepia", + "ImageFilterBilateralBlur", + "ImageFilterBlur", + "ImageFilterBoxBlur", + "ImageFilterContour", + "ImageFilterDetail", + "ImageFilterEdgeEnhance", + "ImageFilterEdgeEnhanceMore", + "ImageFilterEmboss", + "ImageFilterFindEdges", + "ImageFilterGaussianBlur", + "ImageFilterGaussianBlurAdvanced", + "ImageFilterMax", + "ImageFilterMedianBlur", + "ImageFilterMin", + "ImageFilterMode", + "ImageFilterRank", + "ImageFilterSharpen", + "ImageFilterSmooth", + "ImageFilterSmoothMore", + "ImageFilterStackBlur", + "ImageNoiseBeta", + "ImageNoiseBinomial", + "ImageNoiseBytes", + "ImageNoiseGaussian", + "ImageSegmentation", + "ImageSegmentationCustom", + "ImageSegmentationCustomAdvanced", + "ImageText", + "ImageTextMultiline", + "ImageTextMultilineOutlined", + "ImageTextOutlined", + "ImageTransformCropAbsolute", + "ImageTransformCropCorners", + "ImageTransformCropRelative", + "ImageTransformPaddingAbsolute", + "ImageTransformPaddingRelative", + "ImageTransformResizeAbsolute", + "ImageTransformResizeClip", + "ImageTransformResizeRelative", + "ImageTransformRotate", + "ImageTransformTranspose", + "LatentClamp", + "MaskClamp", + "ModelClamp", + "StyleModelClamp", + "UpscaleModelClamp", + "VaeClamp" + ], + { + "title_aux": "Allor Plugin" + } + ], + "https://github.com/Nuked88/ComfyUI-N-Nodes": [ + [ + "CLIPTextEncodeAdvancedNSuite [n-suite]", + "DynamicPrompt [n-suite]", + "Float Variable [n-suite]", + "FrameInterpolator [n-suite]", + "GPT Loader Simple [n-suite]", + "GPT Sampler [n-suite]", + "ImagePadForOutpaintAdvanced [n-suite]", + "Integer Variable [n-suite]", + "Llava Clip Loader [n-suite]", + "LoadFramesFromFolder [n-suite]", + "LoadVideo [n-suite]", + "SaveVideo [n-suite]", + "SetMetadataForSaveVideo [n-suite]", + "String Variable [n-suite]" + ], + { + "title_aux": "ComfyUI-N-Nodes" + } + ], + "https://github.com/Off-Live/ComfyUI-off-suite": [ + [ + "Apply CLAHE", + "Cached Image Load From URL", + "Crop Center wigh SEGS", + "Crop Center with SEGS", + "Dilate Mask for Each Face", + "GW Number Formatting", + "Image Crop Fit", + "Image Resize Fit", + "OFF SEGS to Image", + "Paste Face Segment to Image", + "Query Gender and Age", + "SEGS to Face Crop Data", + "Safe Mask to Image", + "VAE Encode For Inpaint V2", + "Watermarking" + ], + { + "title_aux": "ComfyUI-off-suite" + } + ], + "https://github.com/Onierous/QRNG_Node_ComfyUI/raw/main/qrng_node.py": [ + [ + "QRNG_Node_CSV" + ], + { + "title_aux": "QRNG_Node_ComfyUI" + } + ], + "https://github.com/PCMonsterx/ComfyUI-CSV-Loader": [ + [ + "Load Artists CSV", + "Load Artmovements CSV", + "Load Characters CSV", + "Load Colors CSV", + "Load Composition CSV", + "Load Lighting CSV", + "Load Negative CSV", + "Load Positive CSV", + "Load Settings CSV", + "Load Styles CSV" + ], + { + "title_aux": "ComfyUI-CSV-Loader" + } + ], + "https://github.com/ParmanBabra/ComfyUI-Malefish-Custom-Scripts": [ + [ + "CSVPromptsLoader", + "CombinePrompt", + "MultiLoraLoader", + "RandomPrompt" + ], + { + "title_aux": "ComfyUI-Malefish-Custom-Scripts" + } + ], + "https://github.com/Pfaeff/pfaeff-comfyui": [ + [ + "AstropulsePixelDetector", + "BackgroundRemover", + "ImagePadForBetterOutpaint", + "Inpainting", + "InpaintingPipelineLoader" + ], + { + "title_aux": "pfaeff-comfyui" + } + ], + "https://github.com/QaisMalkawi/ComfyUI-QaisHelper": [ + [ + "Bool Binary Operation", + "Bool Unary Operation", + "Item Debugger", + "Item Switch", + "Nearest 
SDXL Resolution", + "SDXL Resolution", + "Size Swapper" + ], + { + "title_aux": "ComfyUI-Qais-Helper" + } + ], + "https://github.com/RenderRift/ComfyUI-RenderRiftNodes": [ + [ + "AnalyseMetadata", + "DateIntegerNode", + "DisplayMetaOptions", + "LoadImageWithMeta", + "MetadataOverlayNode", + "VideoPathMetaExtraction" + ], + { + "title_aux": "ComfyUI-RenderRiftNodes" + } + ], + "https://github.com/Ryuukeisyou/comfyui_face_parsing": [ + [ + "BBoxListItemSelect(FaceParsing)", + "BBoxResize(FaceParsing)", + "ColorAdjust(FaceParsing)", + "FaceBBoxDetect(FaceParsing)", + "FaceBBoxDetectorLoader(FaceParsing)", + "FaceParse(FaceParsing)", + "FaceParsingModelLoader(FaceParsing)", + "FaceParsingProcessorLoader(FaceParsing)", + "FaceParsingResultsParser(FaceParsing)", + "GuidedFilter(FaceParsing)", + "ImageCropWithBBox(FaceParsing)", + "ImageInsertWithBBox(FaceParsing)", + "ImageListSelect(FaceParsing)", + "ImagePadWithBBox(FaceParsing)", + "ImageResizeCalculator(FaceParsing)", + "ImageResizeWithBBox(FaceParsing)", + "ImageSize(FaceParsing)", + "LatentCropWithBBox(FaceParsing)", + "LatentInsertWithBBox(FaceParsing)", + "LatentSize(FaceParsing)", + "MaskComposite(FaceParsing)", + "MaskListComposite(FaceParsing)", + "MaskListSelect(FaceParsing)", + "MaskToBBox(FaceParsing)", + "SkinDetectTraditional(FaceParsing)" + ], + { + "title_aux": "comfyui_face_parsing" + } + ], + "https://github.com/Ryuukeisyou/comfyui_image_io_helpers": [ + [ + "ImageLoadAsMaskByPath(ImageIOHelpers)", + "ImageLoadByPath(ImageIOHelpers)", + "ImageLoadFromBase64(ImageIOHelpers)", + "ImageSaveAsBase64(ImageIOHelpers)", + "ImageSaveToPath(ImageIOHelpers)" + ], + { + "title_aux": "comfyui_image_io_helpers" + } + ], + "https://github.com/SLAPaper/ComfyUI-Image-Selector": [ + [ + "ImageDuplicator", + "ImageSelector", + "LatentDuplicator", + "LatentSelector" + ], + { + "title_aux": "ComfyUI-Image-Selector" + } + ], + "https://github.com/SOELexicon/ComfyUI-LexMSDBNodes": [ + [ + "MSSqlSelectNode", + "MSSqlTableNode" + ], + { + "title_aux": "LexMSDBNodes" + } + ], + "https://github.com/SOELexicon/ComfyUI-LexTools": [ + [ + "AesthetlcScoreSorter", + "AgeClassifierNode", + "ArtOrHumanClassifierNode", + "CalculateAestheticScore", + "DocumentClassificationNode", + "FoodCategoryClassifierNode", + "ImageAspectPadNode", + "ImageCaptioning", + "ImageFilterByFloatScoreNode", + "ImageFilterByIntScoreNode", + "ImageQualityScoreNode", + "ImageRankingNode", + "ImageScaleToMin", + "LoadAesteticModel", + "MD5ImageHashNode", + "SamplerPropertiesNode", + "ScoreConverterNode", + "SeedIncrementerNode", + "SegformerNode", + "SegformerNodeMasks", + "SegformerNodeMergeSegments", + "StepCfgIncrementNode" + ], + { + "title_aux": "ComfyUI-LexTools" + } + ], + "https://github.com/SadaleNet/CLIPTextEncodeA1111-ComfyUI/raw/master/custom_nodes/clip_text_encoder_a1111.py": [ + [ + "CLIPTextEncodeA1111", + "RerouteTextForCLIPTextEncodeA1111" + ], + { + "title_aux": "ComfyUI A1111-like Prompt Custom Node Solution" + } + ], + "https://github.com/Scholar01/ComfyUI-Keyframe": [ + [ + "KeyframeApply", + "KeyframeInterpolationPart", + "KeyframePart" + ], + { + "title_aux": "SComfyUI-Keyframe" + } + ], + "https://github.com/SeargeDP/SeargeSDXL": [ + [ + "SeargeAdvancedParameters", + "SeargeCheckpointLoader", + "SeargeConditionMixing", + "SeargeConditioningMuxer2", + "SeargeConditioningMuxer5", + "SeargeConditioningParameters", + "SeargeControlnetAdapterV2", + "SeargeControlnetModels", + "SeargeCustomAfterUpscaling", + "SeargeCustomAfterVaeDecode", + "SeargeCustomPromptMode", 
+ "SeargeDebugPrinter", + "SeargeEnablerInputs", + "SeargeFloatConstant", + "SeargeFloatMath", + "SeargeFloatPair", + "SeargeFreeU", + "SeargeGenerated1", + "SeargeGenerationParameters", + "SeargeHighResolution", + "SeargeImage2ImageAndInpainting", + "SeargeImageAdapterV2", + "SeargeImageSave", + "SeargeImageSaving", + "SeargeInput1", + "SeargeInput2", + "SeargeInput3", + "SeargeInput4", + "SeargeInput5", + "SeargeInput6", + "SeargeInput7", + "SeargeIntegerConstant", + "SeargeIntegerMath", + "SeargeIntegerPair", + "SeargeIntegerScaler", + "SeargeLatentMuxer3", + "SeargeLoraLoader", + "SeargeLoras", + "SeargeMagicBox", + "SeargeModelSelector", + "SeargeOperatingMode", + "SeargeOutput1", + "SeargeOutput2", + "SeargeOutput3", + "SeargeOutput4", + "SeargeOutput5", + "SeargeOutput6", + "SeargeOutput7", + "SeargeParameterProcessor", + "SeargePipelineStart", + "SeargePipelineTerminator", + "SeargePreviewImage", + "SeargePromptAdapterV2", + "SeargePromptCombiner", + "SeargePromptStyles", + "SeargePromptText", + "SeargeSDXLBasePromptEncoder", + "SeargeSDXLImage2ImageSampler", + "SeargeSDXLImage2ImageSampler2", + "SeargeSDXLPromptEncoder", + "SeargeSDXLRefinerPromptEncoder", + "SeargeSDXLSampler", + "SeargeSDXLSampler2", + "SeargeSDXLSamplerV3", + "SeargeSamplerAdvanced", + "SeargeSamplerInputs", + "SeargeSaveFolderInputs", + "SeargeSeparator", + "SeargeStylePreprocessor", + "SeargeTextInputV2", + "SeargeUpscaleModelLoader", + "SeargeUpscaleModels", + "SeargeVAELoader" + ], + { + "title_aux": "SeargeSDXL" + } + ], + "https://github.com/Ser-Hilary/SDXL_sizing/raw/main/conditioning_sizing_for_SDXL.py": [ + [ + "get_aspect_from_image", + "get_aspect_from_ints", + "sizing_node", + "sizing_node_basic", + "sizing_node_unparsed" + ], + { + "title_aux": "SDXL_sizing" + } + ], + "https://github.com/ShmuelRonen/ComfyUI-SVDResizer": [ + [ + "SVDRsizer" + ], + { + "title_aux": "ComfyUI-SVDResizer" + } + ], + "https://github.com/Shraknard/ComfyUI-Remover": [ + [ + "Remover" + ], + { + "title_aux": "ComfyUI-Remover" + } + ], + "https://github.com/Siberpone/lazy-pony-prompter": [ + [ + "LPP_Deleter", + "LPP_Derpibooru", + "LPP_E621", + "LPP_Loader_Derpibooru", + "LPP_Loader_E621", + "LPP_Saver" + ], + { + "title_aux": "Lazy Pony Prompter" + } + ], + "https://github.com/Smuzzies/comfyui_chatbox_overlay/raw/main/chatbox_overlay.py": [ + [ + "Chatbox Overlay" + ], + { + "title_aux": "Chatbox Overlay node for ComfyUI" + } + ], + "https://github.com/SoftMeng/ComfyUI_Mexx_Poster": [ + [ + "ComfyUI_Mexx_Poster" + ], + { + "title_aux": "ComfyUI_Mexx_Poster" + } + ], + "https://github.com/SoftMeng/ComfyUI_Mexx_Styler": [ + [ + "MexxSDXLPromptStyler", + "MexxSDXLPromptStylerAdvanced" + ], + { + "title_aux": "ComfyUI_Mexx_Styler" + } + ], + "https://github.com/SpaceKendo/ComfyUI-svd_txt2vid": [ + [ + "SVD_txt2vid_ConditioningwithLatent" + ], + { + "title_aux": "Text to video for Stable Video Diffusion in ComfyUI" + } + ], + "https://github.com/Stability-AI/stability-ComfyUI-nodes": [ + [ + "ColorBlend", + "ControlLoraSave", + "GetImageSize" + ], + { + "title_aux": "stability-ComfyUI-nodes" + } + ], + "https://github.com/StartHua/ComfyUI_Seg_VITON": [ + [ + "segformer_agnostic", + "segformer_clothes", + "segformer_remove_bg", + "stabel_vition" + ], + { + "title_aux": "ComfyUI_Seg_VITON" + } + ], + "https://github.com/StartHua/Comfyui_joytag": [ + [ + "CXH_JoyTag" + ], + { + "title_aux": "Comfyui_joytag" + } + ], + "https://github.com/StartHua/Comfyui_segformer_b2_clothes": [ + [ + "segformer_b2_clothes" + ], + { + 
"title_aux": "comfyui_segformer_b2_clothes" + } + ], + "https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes": [ + [ + "CR 8 Channel In", + "CR 8 Channel Out", + "CR Apply ControlNet", + "CR Apply LoRA Stack", + "CR Apply Model Merge", + "CR Apply Multi Upscale", + "CR Apply Multi-ControlNet", + "CR Arabic Text RTL", + "CR Aspect Ratio", + "CR Aspect Ratio Banners", + "CR Aspect Ratio SDXL", + "CR Aspect Ratio Social Media", + "CR Batch Images From List", + "CR Batch Process Switch", + "CR Binary Pattern", + "CR Binary To Bit List", + "CR Bit Schedule", + "CR Central Schedule", + "CR Checker Pattern", + "CR Clamp Value", + "CR Clip Input Switch", + "CR Color Bars", + "CR Color Gradient", + "CR Color Panel", + "CR Color Tint", + "CR Combine Prompt", + "CR Combine Schedules", + "CR Comic Panel Templates", + "CR Composite Text", + "CR Conditioning Input Switch", + "CR Conditioning Mixer", + "CR ControlNet Input Switch", + "CR Current Frame", + "CR Cycle Images", + "CR Cycle Images Simple", + "CR Cycle LoRAs", + "CR Cycle Models", + "CR Cycle Text", + "CR Cycle Text Simple", + "CR Data Bus In", + "CR Data Bus Out", + "CR Debatch Frames", + "CR Diamond Panel", + "CR Draw Perspective Text", + "CR Draw Pie", + "CR Draw Shape", + "CR Draw Text", + "CR Encode Scheduled Prompts", + "CR Feathered Border", + "CR Float Range List", + "CR Float To Integer", + "CR Float To String", + "CR Font File List", + "CR Get Parameter From Prompt", + "CR Gradient Float", + "CR Gradient Integer", + "CR Half Drop Panel", + "CR Halftone Filter", + "CR Halftone Grid", + "CR Hires Fix Process Switch", + "CR Image Border", + "CR Image Grid Panel", + "CR Image Input Switch", + "CR Image Input Switch (4 way)", + "CR Image List", + "CR Image List Simple", + "CR Image Output", + "CR Image Panel", + "CR Image Pipe Edit", + "CR Image Pipe In", + "CR Image Pipe Out", + "CR Image Size", + "CR Img2Img Process Switch", + "CR Increment Float", + "CR Increment Integer", + "CR Index", + "CR Index Increment", + "CR Index Multiply", + "CR Index Reset", + "CR Input Text List", + "CR Integer Multiple", + "CR Integer Range List", + "CR Integer To String", + "CR Interpolate Latents", + "CR Intertwine Lists", + "CR Keyframe List", + "CR Latent Batch Size", + "CR Latent Input Switch", + "CR LoRA List", + "CR LoRA Stack", + "CR Load Animation Frames", + "CR Load Flow Frames", + "CR Load GIF As List", + "CR Load Image List", + "CR Load Image List Plus", + "CR Load LoRA", + "CR Load Prompt Style", + "CR Load Schedule From File", + "CR Load Scheduled ControlNets", + "CR Load Scheduled LoRAs", + "CR Load Scheduled Models", + "CR Load Text List", + "CR Mask Text", + "CR Math Operation", + "CR Model Input Switch", + "CR Model List", + "CR Model Merge Stack", + "CR Module Input", + "CR Module Output", + "CR Module Pipe Loader", + "CR Multi Upscale Stack", + "CR Multi-ControlNet Stack", + "CR Multiline Text", + "CR Output Flow Frames", + "CR Output Schedule To File", + "CR Overlay Text", + "CR Overlay Transparent Image", + "CR Page Layout", + "CR Pipe Switch", + "CR Polygons", + "CR Prompt List", + "CR Prompt List Keyframes", + "CR Prompt Scheduler", + "CR Prompt Text", + "CR Radial Gradient", + "CR Random Hex Color", + "CR Random LoRA Stack", + "CR Random Multiline Colors", + "CR Random Multiline Values", + "CR Random Panel Codes", + "CR Random RGB", + "CR Random RGB Gradient", + "CR Random Shape Pattern", + "CR Random Weight LoRA", + "CR Repeater", + "CR SD1.5 Aspect Ratio", + "CR SDXL Aspect Ratio", + "CR SDXL Base Prompt Encoder", + "CR SDXL 
Prompt Mix Presets", + "CR SDXL Prompt Mixer", + "CR SDXL Style Text", + "CR Save Text To File", + "CR Schedule Input Switch", + "CR Schedule To ScheduleList", + "CR Seamless Checker", + "CR Seed", + "CR Seed to Int", + "CR Select Font", + "CR Select ISO Size", + "CR Select Model", + "CR Select Resize Method", + "CR Set Switch From String", + "CR Set Value On Binary", + "CR Set Value On Boolean", + "CR Set Value on String", + "CR Simple Banner", + "CR Simple Binary Pattern", + "CR Simple Binary Pattern Simple", + "CR Simple Image Compare", + "CR Simple List", + "CR Simple Meme Template", + "CR Simple Prompt List", + "CR Simple Prompt List Keyframes", + "CR Simple Prompt Scheduler", + "CR Simple Schedule", + "CR Simple Text Panel", + "CR Simple Text Scheduler", + "CR Simple Text Watermark", + "CR Simple Titles", + "CR Simple Value Scheduler", + "CR Split String", + "CR Starburst Colors", + "CR Starburst Lines", + "CR String To Boolean", + "CR String To Combo", + "CR String To Number", + "CR Style Bars", + "CR Switch Model and CLIP", + "CR Text", + "CR Text Blacklist", + "CR Text Concatenate", + "CR Text Cycler", + "CR Text Input Switch", + "CR Text Input Switch (4 way)", + "CR Text Length", + "CR Text List", + "CR Text List Simple", + "CR Text List To String", + "CR Text Operation", + "CR Text Replace", + "CR Text Scheduler", + "CR Thumbnail Preview", + "CR Trigger", + "CR Upscale Image", + "CR VAE Decode", + "CR VAE Input Switch", + "CR Value", + "CR Value Cycler", + "CR Value Scheduler", + "CR Vignette Filter", + "CR XY From Folder", + "CR XY Index", + "CR XY Interpolate", + "CR XY List", + "CR XY Product", + "CR XY Save Grid Image", + "CR XYZ Index", + "CR_Aspect Ratio For Print" + ], + { + "author": "Suzie1", + "description": "175 custom nodes for artists, designers and animators.", + "nickname": "Comfyroll Studio", + "title": "Comfyroll Studio", + "title_aux": "ComfyUI_Comfyroll_CustomNodes" + } + ], + "https://github.com/Sxela/ComfyWarp": [ + [ + "ExtractOpticalFlow", + "LoadFrame", + "LoadFrameFromDataset", + "LoadFrameFromFolder", + "LoadFramePairFromDataset", + "LoadFrameSequence", + "MakeFrameDataset", + "MixConsistencyMaps", + "OffsetNumber", + "ResizeToFit", + "SaveFrame", + "WarpFrame" + ], + { + "title_aux": "ComfyWarp" + } + ], + "https://github.com/TGu-97/ComfyUI-TGu-utils": [ + [ + "MPNReroute", + "MPNSwitch", + "PNSwitch" + ], + { + "title_aux": "TGu Utilities" + } + ], + "https://github.com/THtianhao/ComfyUI-FaceChain": [ + [ + "FC CropAndPaste", + "FC CropBottom", + "FC CropToOrigin", + "FC FaceDetectCrop", + "FC FaceFusion", + "FC FaceSegAndReplace", + "FC FaceSegment", + "FC MaskOP", + "FC RemoveCannyFace", + "FC ReplaceByMask", + "FC StyleLoraLoad" + ], + { + "title_aux": "ComfyUI-FaceChain" + } + ], + "https://github.com/THtianhao/ComfyUI-Portrait-Maker": [ + [ + "PM_BoxCropImage", + "PM_ColorTransfer", + "PM_ExpandMaskBox", + "PM_FaceFusion", + "PM_FaceShapMatch", + "PM_FaceSkin", + "PM_GetImageInfo", + "PM_ImageResizeTarget", + "PM_ImageScaleShort", + "PM_MakeUpTransfer", + "PM_MaskDilateErode", + "PM_MaskMerge2Image", + "PM_PortraitEnhancement", + "PM_RatioMerge2Image", + "PM_ReplaceBoxImg", + "PM_RetinaFace", + "PM_Similarity", + "PM_SkinRetouching", + "PM_SuperColorTransfer", + "PM_SuperMakeUpTransfer" + ], + { + "title_aux": "ComfyUI-Portrait-Maker" + } + ], + "https://github.com/TRI3D-LC/tri3d-comfyui-nodes": [ + [ + "tri3d-HistogramEqualization", + "tri3d-adjust-neck", + "tri3d-atr-parse", + "tri3d-atr-parse-batch", + "tri3d-clipdrop-bgremove-api", + 
"tri3d-dwpose", + "tri3d-extract-hand", + "tri3d-extract-parts-batch", + "tri3d-extract-parts-batch2", + "tri3d-extract-parts-mask-batch", + "tri3d-face-recognise", + "tri3d-float-to-image", + "tri3d-fuzzification", + "tri3d-image-mask-2-box", + "tri3d-image-mask-box-2-image", + "tri3d-interaction-canny", + "tri3d-load-pose-json", + "tri3d-pose-adaption", + "tri3d-pose-to-image", + "tri3d-position-hands", + "tri3d-position-parts-batch", + "tri3d-recolor-mask", + "tri3d-recolor-mask-LAB_space", + "tri3d-recolor-mask-LAB_space_manual", + "tri3d-recolor-mask-RGB_space", + "tri3d-skin-feathered-padded-mask", + "tri3d-swap-pixels" + ], + { + "title_aux": "tri3d-comfyui-nodes" + } + ], + "https://github.com/Taremin/comfyui-prompt-extranetworks": [ + [ + "PromptExtraNetworks" + ], + { + "title_aux": "ComfyUI Prompt ExtraNetworks" + } + ], + "https://github.com/Taremin/comfyui-string-tools": [ + [ + "StringToolsBalancedChoice", + "StringToolsConcat", + "StringToolsRandomChoice", + "StringToolsString", + "StringToolsText" + ], + { + "title_aux": "ComfyUI String Tools" + } + ], + "https://github.com/TeaCrab/ComfyUI-TeaNodes": [ + [ + "TC_ColorFill", + "TC_EqualizeCLAHE", + "TC_ImageResize", + "TC_ImageScale", + "TC_RandomColorFill", + "TC_SizeApproximation" + ], + { + "title_aux": "ComfyUI-TeaNodes" + } + ], + "https://github.com/TemryL/ComfyS3": [ + [ + "DownloadFileS3", + "LoadImageS3", + "SaveImageS3", + "SaveVideoFilesS3", + "UploadFileS3" + ], + { + "title_aux": "ComfyS3" + } + ], + "https://github.com/TheBarret/ZSuite": [ + [ + "ZSuite: Prompter", + "ZSuite: RF Noise", + "ZSuite: SeedMod" + ], + { + "title_aux": "ZSuite" + } + ], + "https://github.com/TinyTerra/ComfyUI_tinyterraNodes": [ + [ + "ttN busIN", + "ttN busOUT", + "ttN compareInput", + "ttN concat", + "ttN debugInput", + "ttN float", + "ttN hiresfixScale", + "ttN imageOutput", + "ttN imageREMBG", + "ttN int", + "ttN multiModelMerge", + "ttN pipe2BASIC", + "ttN pipe2DETAILER", + "ttN pipeEDIT", + "ttN pipeEncodeConcat", + "ttN pipeIN", + "ttN pipeKSampler", + "ttN pipeKSamplerAdvanced", + "ttN pipeKSamplerSDXL", + "ttN pipeLoader", + "ttN pipeLoaderSDXL", + "ttN pipeLoraStack", + "ttN pipeOUT", + "ttN seed", + "ttN seedDebug", + "ttN text", + "ttN text3BOX_3WAYconcat", + "ttN text7BOX_concat", + "ttN textDebug", + "ttN xyPlot" + ], + { + "author": "tinyterra", + "description": "This extension offers various pipe nodes, fullscreen image viewer based on node history, dynamic widgets, interface customization, and more.", + "nickname": "ttNodes", + "nodename_pattern": "^ttN ", + "title": "tinyterraNodes", + "title_aux": "tinyterraNodes" + } + ], + "https://github.com/TripleHeadedMonkey/ComfyUI_MileHighStyler": [ + [ + "menus" + ], + { + "title_aux": "ComfyUI_MileHighStyler" + } + ], + "https://github.com/Tropfchen/ComfyUI-Embedding_Picker": [ + [ + "EmbeddingPicker" + ], + { + "title_aux": "Embedding Picker" + } + ], + "https://github.com/Tropfchen/ComfyUI-yaResolutionSelector": [ + [ + "YARS", + "YARSAdv" + ], + { + "title_aux": "YARS: Yet Another Resolution Selector" + } + ], + "https://github.com/Trung0246/ComfyUI-0246": [ + [ + "0246.Beautify", + "0246.BoxRange", + "0246.CastReroute", + "0246.Cloud", + "0246.Convert", + "0246.Count", + "0246.Highway", + "0246.HighwayBatch", + "0246.Hold", + "0246.Hub", + "0246.Junction", + "0246.JunctionBatch", + "0246.Loop", + "0246.Merge", + "0246.Meta", + "0246.Pick", + "0246.RandomInt", + "0246.Script", + "0246.ScriptNode", + "0246.ScriptPile", + "0246.ScriptRule", + "0246.Stringify", + 
"0246.Switch" + ], + { + "author": "Trung0246", + "description": "Random nodes for ComfyUI I made to solve my struggle with ComfyUI (ex: pipe, process). Have varying quality.", + "nickname": "ComfyUI-0246", + "title": "ComfyUI-0246", + "title_aux": "ComfyUI-0246" + } + ], + "https://github.com/Ttl/ComfyUi_NNLatentUpscale": [ + [ + "NNLatentUpscale" + ], + { + "title_aux": "ComfyUI Neural network latent upscale custom node" + } + ], + "https://github.com/Umikaze-job/select_folder_path_easy": [ + [ + "SelectFolderPathEasy" + ], + { + "title_aux": "select_folder_path_easy" + } + ], + "https://github.com/WASasquatch/ASTERR": [ + [ + "ASTERR", + "SaveASTERR" + ], + { + "title_aux": "ASTERR" + } + ], + "https://github.com/WASasquatch/ComfyUI_Preset_Merger": [ + [ + "Preset_Model_Merge" + ], + { + "title_aux": "ComfyUI Preset Merger" + } + ], + "https://github.com/WASasquatch/FreeU_Advanced": [ + [ + "FreeU (Advanced)", + "FreeU_V2 (Advanced)" + ], + { + "title_aux": "FreeU_Advanced" + } + ], + "https://github.com/WASasquatch/PPF_Noise_ComfyUI": [ + [ + "Blend Latents (PPF Noise)", + "Cross-Hatch Power Fractal (PPF Noise)", + "Images as Latents (PPF Noise)", + "Perlin Power Fractal Latent (PPF Noise)" + ], + { + "title_aux": "PPF_Noise_ComfyUI" + } + ], + "https://github.com/WASasquatch/PowerNoiseSuite": [ + [ + "Blend Latents (PPF Noise)", + "Cross-Hatch Power Fractal (PPF Noise)", + "Cross-Hatch Power Fractal Settings (PPF Noise)", + "Images as Latents (PPF Noise)", + "Latent Adjustment (PPF Noise)", + "Latents to CPU (PPF Noise)", + "Linear Cross-Hatch Power Fractal (PPF Noise)", + "Perlin Power Fractal Latent (PPF Noise)", + "Perlin Power Fractal Settings (PPF Noise)", + "Power KSampler Advanced (PPF Noise)", + "Power-Law Noise (PPF Noise)" + ], + { + "title_aux": "Power Noise Suite for ComfyUI" + } + ], + "https://github.com/WASasquatch/WAS_Extras": [ + [ + "BLVAEEncode", + "CLIPTextEncodeList", + "CLIPTextEncodeSequence2", + "ConditioningBlend", + "DebugInput", + "KSamplerSeq", + "KSamplerSeq2", + "VAEEncodeForInpaint (WAS)", + "VividSharpen" + ], + { + "title_aux": "WAS_Extras" + } + ], + "https://github.com/WASasquatch/was-node-suite-comfyui": [ + [ + "BLIP Analyze Image", + "BLIP Model Loader", + "Blend Latents", + "Boolean To Text", + "Bounded Image Blend", + "Bounded Image Blend with Mask", + "Bounded Image Crop", + "Bounded Image Crop with Mask", + "Bus Node", + "CLIP Input Switch", + "CLIP Vision Input Switch", + "CLIPSeg Batch Masking", + "CLIPSeg Masking", + "CLIPSeg Model Loader", + "CLIPTextEncode (BlenderNeko Advanced + NSP)", + "CLIPTextEncode (NSP)", + "Cache Node", + "Checkpoint Loader", + "Checkpoint Loader (Simple)", + "Conditioning Input Switch", + "Constant Number", + "Control Net Model Input Switch", + "Convert Masks to Images", + "Create Grid Image", + "Create Grid Image from Batch", + "Create Morph Image", + "Create Morph Image from Path", + "Create Video from Path", + "Debug Number to Console", + "Dictionary to Console", + "Diffusers Hub Model Down-Loader", + "Diffusers Model Loader", + "Export API", + "Image Analyze", + "Image Aspect Ratio", + "Image Batch", + "Image Blank", + "Image Blend", + "Image Blend by Mask", + "Image Blending Mode", + "Image Bloom Filter", + "Image Bounds", + "Image Bounds to Console", + "Image Canny Filter", + "Image Chromatic Aberration", + "Image Color Palette", + "Image Crop Face", + "Image Crop Location", + "Image Crop Square Location", + "Image Displacement Warp", + "Image Dragan Photography Filter", + "Image Edge Detection Filter", + 
"Image Film Grain", + "Image Filter Adjustments", + "Image Flip", + "Image Generate Gradient", + "Image Gradient Map", + "Image High Pass Filter", + "Image History Loader", + "Image Input Switch", + "Image Levels Adjustment", + "Image Load", + "Image Lucy Sharpen", + "Image Median Filter", + "Image Mix RGB Channels", + "Image Monitor Effects Filter", + "Image Nova Filter", + "Image Padding", + "Image Paste Crop", + "Image Paste Crop by Location", + "Image Paste Face", + "Image Perlin Noise", + "Image Perlin Power Fractal", + "Image Pixelate", + "Image Power Noise", + "Image Rembg (Remove Background)", + "Image Remove Background (Alpha)", + "Image Remove Color", + "Image Resize", + "Image Rotate", + "Image Rotate Hue", + "Image SSAO (Ambient Occlusion)", + "Image SSDO (Direct Occlusion)", + "Image Save", + "Image Seamless Texture", + "Image Select Channel", + "Image Select Color", + "Image Shadows and Highlights", + "Image Size to Number", + "Image Stitch", + "Image Style Filter", + "Image Threshold", + "Image Tiled", + "Image Transpose", + "Image Voronoi Noise Filter", + "Image fDOF Filter", + "Image to Latent Mask", + "Image to Noise", + "Image to Seed", + "Images to Linear", + "Images to RGB", + "Inset Image Bounds", + "Integer place counter", + "KSampler (WAS)", + "KSampler Cycle", + "Latent Batch", + "Latent Input Switch", + "Latent Noise Injection", + "Latent Size to Number", + "Latent Upscale by Factor (WAS)", + "Load Cache", + "Load Image Batch", + "Load Lora", + "Load Text File", + "Logic Boolean", + "Logic Boolean Primitive", + "Logic Comparison AND", + "Logic Comparison OR", + "Logic Comparison XOR", + "Logic NOT", + "Lora Input Switch", + "Lora Loader", + "Mask Arbitrary Region", + "Mask Batch", + "Mask Batch to Mask", + "Mask Ceiling Region", + "Mask Crop Dominant Region", + "Mask Crop Minority Region", + "Mask Crop Region", + "Mask Dilate Region", + "Mask Dominant Region", + "Mask Erode Region", + "Mask Fill Holes", + "Mask Floor Region", + "Mask Gaussian Region", + "Mask Invert", + "Mask Minority Region", + "Mask Paste Region", + "Mask Smooth Region", + "Mask Threshold Region", + "Masks Add", + "Masks Combine Batch", + "Masks Combine Regions", + "Masks Subtract", + "MiDaS Depth Approximation", + "MiDaS Mask Image", + "MiDaS Model Loader", + "Model Input Switch", + "Number Counter", + "Number Input Condition", + "Number Input Switch", + "Number Multiple Of", + "Number Operation", + "Number PI", + "Number to Float", + "Number to Int", + "Number to Seed", + "Number to String", + "Number to Text", + "Prompt Multiple Styles Selector", + "Prompt Styles Selector", + "Random Number", + "SAM Image Mask", + "SAM Model Loader", + "SAM Parameters", + "SAM Parameters Combine", + "Samples Passthrough (Stat System)", + "Save Text File", + "Seed", + "String to Text", + "Tensor Batch to Image", + "Text Add Token by Input", + "Text Add Tokens", + "Text Compare", + "Text Concatenate", + "Text Contains", + "Text Dictionary Convert", + "Text Dictionary Get", + "Text Dictionary Keys", + "Text Dictionary New", + "Text Dictionary To Text", + "Text Dictionary Update", + "Text File History Loader", + "Text Find and Replace", + "Text Find and Replace Input", + "Text Find and Replace by Dictionary", + "Text Input Switch", + "Text List", + "Text List Concatenate", + "Text List to Text", + "Text Load Line From File", + "Text Multiline", + "Text Parse A1111 Embeddings", + "Text Parse Noodle Soup Prompts", + "Text Parse Tokens", + "Text Random Line", + "Text Random Prompt", + "Text Shuffle", + "Text 
String", + "Text String Truncate", + "Text to Conditioning", + "Text to Console", + "Text to Number", + "Text to String", + "True Random.org Number Generator", + "Upscale Model Loader", + "Upscale Model Switch", + "VAE Input Switch", + "Video Dump Frames", + "Write to GIF", + "Write to Video", + "unCLIP Checkpoint Loader" + ], + { + "title_aux": "WAS Node Suite" + } + ], + "https://github.com/WebDev9000/WebDev9000-Nodes": [ + [ + "IgnoreBraces", + "SettingsSwitch" + ], + { + "title_aux": "WebDev9000-Nodes" + } + ], + "https://github.com/YMC-GitHub/ymc-node-suite-comfyui": [ + [ + "canvas-util-cal-size", + "conditioning-util-input-switch", + "cutoff-region-util", + "hks-util-cal-denoise-step", + "img-util-get-image-size", + "img-util-switch-input-image", + "io-image-save", + "io-text-save", + "io-util-file-list-get", + "io-util-file-list-get-text", + "number-util-random-num", + "pipe-util-to-basic-pipe", + "region-util-get-by-center-and-size", + "region-util-get-by-lt", + "region-util-get-crop-location-from-center-size-text", + "region-util-get-pad-out-location-by-size", + "text-preset-colors", + "text-util-join-text", + "text-util-loop-text", + "text-util-path-list", + "text-util-prompt-add-prompt", + "text-util-prompt-adv-dup", + "text-util-prompt-adv-search", + "text-util-prompt-del", + "text-util-prompt-dup", + "text-util-prompt-join", + "text-util-prompt-search", + "text-util-prompt-shuffle", + "text-util-prompt-std", + "text-util-prompt-unweight", + "text-util-random-text", + "text-util-search-text", + "text-util-show-text", + "text-util-switch-text", + "xyz-util-txt-to-int" + ], + { + "title_aux": "ymc-node-suite-comfyui" + } + ], + "https://github.com/YOUR-WORST-TACO/ComfyUI-TacoNodes": [ + [ + "Example", + "TacoAnimatedLoader", + "TacoGifMaker", + "TacoImg2ImgAnimatedLoader", + "TacoImg2ImgAnimatedProcessor", + "TacoLatent" + ], + { + "title_aux": "ComfyUI-TacoNodes" + } + ], + "https://github.com/YinBailiang/MergeBlockWeighted_fo_ComfyUI": [ + [ + "MergeBlockWeighted" + ], + { + "title_aux": "MergeBlockWeighted_fo_ComfyUI" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-ArtGallery": [ + [ + "ArtGallery_Zho", + "ArtistsImage_Zho", + "CamerasImage_Zho", + "FilmsImage_Zho", + "MovementsImage_Zho", + "StylesImage_Zho" + ], + { + "title_aux": "ComfyUI-ArtGallery" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini": [ + [ + "ConcatText_Zho", + "DisplayText_Zho", + "Gemini_API_Chat_Zho", + "Gemini_API_S_Chat_Zho", + "Gemini_API_S_Vsion_ImgURL_Zho", + "Gemini_API_S_Zho", + "Gemini_API_Vsion_ImgURL_Zho", + "Gemini_API_Zho" + ], + { + "title_aux": "ComfyUI-Gemini" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID": [ + [ + "IDBaseModelLoader_fromhub", + "IDBaseModelLoader_local", + "IDControlNetLoader", + "IDGenerationNode", + "ID_Prompt_Styler", + "InsightFaceLoader_Zho", + "Ipadapter_instantidLoader" + ], + { + "title_aux": "ComfyUI-InstantID" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-PhotoMaker-ZHO": [ + [ + "BaseModel_Loader_fromhub", + "BaseModel_Loader_local", + "LoRALoader", + "NEW_PhotoMaker_Generation", + "PhotoMakerAdapter_Loader_fromhub", + "PhotoMakerAdapter_Loader_local", + "PhotoMaker_Generation", + "Prompt_Styler", + "Ref_Image_Preprocessing" + ], + { + "title_aux": "ComfyUI PhotoMaker (ZHO)" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Q-Align": [ + [ + "QAlign_Zho" + ], + { + "title_aux": "ComfyUI-Q-Align" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Qwen-VL-API": [ + [ + "QWenVL_API_S_Multi_Zho", + "QWenVL_API_S_Zho" + ], + { + 
"title_aux": "ComfyUI-Qwen-VL-API" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SVD-ZHO": [ + [ + "SVD_Aspect_Ratio_Zho", + "SVD_Steps_MotionStrength_Seed_Zho", + "SVD_Styler_Zho" + ], + { + "title_aux": "ComfyUI-SVD-ZHO (WIP)" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SegMoE": [ + [ + "SMoE_Generation_Zho", + "SMoE_ModelLoader_Zho" + ], + { + "title_aux": "ComfyUI SegMoE" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite": [ + [ + "AlphaChanelAddByMask", + "ImageCompositeBy_BG_Zho", + "ImageCompositeBy_Zho", + "ImageComposite_BG_Zho", + "ImageComposite_Zho", + "RGB_Image_Zho", + "Text_Image_Frame_Zho", + "Text_Image_Multiline_Zho", + "Text_Image_Zho" + ], + { + "title_aux": "ComfyUI-Text_Image-Composite [WIP]" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-YoloWorld-EfficientSAM": [ + [ + "ESAM_ModelLoader_Zho", + "Yoloworld_ESAM_Zho", + "Yoloworld_ModelLoader_Zho" + ], + { + "title_aux": "ComfyUI YoloWorld-EfficientSAM" + } + ], + "https://github.com/ZHO-ZHO-ZHO/comfyui-portrait-master-zh-cn": [ + [ + "PortraitMaster_\u4e2d\u6587\u7248" + ], + { + "title_aux": "comfyui-portrait-master-zh-cn" + } + ], + "https://github.com/ZaneA/ComfyUI-ImageReward": [ + [ + "ImageRewardLoader", + "ImageRewardScore" + ], + { + "title_aux": "ImageReward" + } + ], + "https://github.com/Zuellni/ComfyUI-ExLlama": [ + [ + "ZuellniExLlamaGenerator", + "ZuellniExLlamaLoader", + "ZuellniTextPreview", + "ZuellniTextReplace" + ], + { + "title_aux": "ComfyUI-ExLlama" + } + ], + "https://github.com/Zuellni/ComfyUI-PickScore-Nodes": [ + [ + "ZuellniPickScoreImageProcessor", + "ZuellniPickScoreLoader", + "ZuellniPickScoreSelector", + "ZuellniPickScoreTextProcessor" + ], + { + "title_aux": "ComfyUI PickScore Nodes" + } + ], + "https://github.com/a1lazydog/ComfyUI-AudioScheduler": [ + [ + "AmplitudeToGraph", + "AmplitudeToNumber", + "AudioToAmplitudeGraph", + "AudioToFFTs", + "BatchAmplitudeSchedule", + "ClipAmplitude", + "GateNormalizedAmplitude", + "LoadAudio", + "NormalizeAmplitude", + "NormalizedAmplitudeDrivenString", + "NormalizedAmplitudeToGraph", + "NormalizedAmplitudeToNumber", + "TransientAmplitudeBasic" + ], + { + "title_aux": "ComfyUI-AudioScheduler" + } + ], + "https://github.com/abdozmantar/ComfyUI-InstaSwap": [ + [ + "InstaSwapFaceSwap", + "InstaSwapLoadFaceModel", + "InstaSwapSaveFaceModel" + ], + { + "title_aux": "InstaSwap Face Swap Node for ComfyUI" + } + ], + "https://github.com/abyz22/image_control": [ + [ + "abyz22_Convertpipe", + "abyz22_Editpipe", + "abyz22_FirstNonNull", + "abyz22_FromBasicPipe_v2", + "abyz22_Frompipe", + "abyz22_ImpactWildcardEncode", + "abyz22_ImpactWildcardEncode_GetPrompt", + "abyz22_Ksampler", + "abyz22_Padding Image", + "abyz22_RemoveControlnet", + "abyz22_SaveImage", + "abyz22_SetQueue", + "abyz22_ToBasicPipe", + "abyz22_Topipe", + "abyz22_blend_onecolor", + "abyz22_blendimages", + "abyz22_bypass", + "abyz22_drawmask", + "abyz22_lamaInpaint", + "abyz22_lamaPreprocessor", + "abyz22_makecircles", + "abyz22_setimageinfo", + "abyz22_smallhead" + ], + { + "title_aux": "image_control" + } + ], + "https://github.com/adbrasi/ComfyUI-TrashNodes-DownloadHuggingface": [ + [ + "DownloadLinkChecker", + "ShowFileNames" + ], + { + "title_aux": "ComfyUI-TrashNodes-DownloadHuggingface" + } + ], + "https://github.com/adieyal/comfyui-dynamicprompts": [ + [ + "DPCombinatorialGenerator", + "DPFeelingLucky", + "DPJinja", + "DPMagicPrompt", + "DPOutput", + "DPRandomGenerator" + ], + { + "title_aux": "DynamicPrompts Custom Nodes" + } + ], + 
"https://github.com/adriflex/ComfyUI_Blender_Texdiff": [ + [ + "ViewportColor", + "ViewportDepth" + ], + { + "title_aux": "ComfyUI_Blender_Texdiff" + } + ], + "https://github.com/aegis72/aegisflow_utility_nodes": [ + [ + "Add Text To Image", + "Aegisflow CLIP Pass", + "Aegisflow Conditioning Pass", + "Aegisflow Image Pass", + "Aegisflow Latent Pass", + "Aegisflow Mask Pass", + "Aegisflow Model Pass", + "Aegisflow Pos/Neg Pass", + "Aegisflow SDXL Tuple Pass", + "Aegisflow VAE Pass", + "Aegisflow controlnet preprocessor bus", + "Apply Instagram Filter", + "Brightness_Contrast_Ally", + "Flatten Colors", + "Gaussian Blur_Ally", + "GlitchThis Effect", + "Hue Rotation", + "Image Flip_ally", + "Placeholder Tuple", + "Swap Color Mode", + "aegisflow Multi_Pass", + "aegisflow Multi_Pass XL", + "af_pipe_in_15", + "af_pipe_in_xl", + "af_pipe_out_15", + "af_pipe_out_xl" + ], + { + "title_aux": "AegisFlow Utility Nodes" + } + ], + "https://github.com/aegis72/comfyui-styles-all": [ + [ + "menus" + ], + { + "title_aux": "ComfyUI-styles-all" + } + ], + "https://github.com/ai-liam/comfyui_liam_util": [ + [ + "LiamLoadImage" + ], + { + "title_aux": "LiamUtil" + } + ], + "https://github.com/aianimation55/ComfyUI-FatLabels": [ + [ + "FatLabels" + ], + { + "title_aux": "Comfy UI FatLabels" + } + ], + "https://github.com/alexopus/ComfyUI-Image-Saver": [ + [ + "Cfg Literal (Image Saver)", + "Checkpoint Loader with Name (Image Saver)", + "Float Literal (Image Saver)", + "Image Saver", + "Int Literal (Image Saver)", + "Sampler Selector (Image Saver)", + "Scheduler Selector (Image Saver)", + "Seed Generator (Image Saver)", + "String Literal (Image Saver)", + "Width/Height Literal (Image Saver)" + ], + { + "title_aux": "ComfyUI Image Saver" + } + ], + "https://github.com/alpertunga-bile/prompt-generator-comfyui": [ + [ + "Prompt Generator" + ], + { + "title_aux": "prompt-generator" + } + ], + "https://github.com/alsritter/asymmetric-tiling-comfyui": [ + [ + "Asymmetric_Tiling_KSampler" + ], + { + "title_aux": "asymmetric-tiling-comfyui" + } + ], + "https://github.com/alt-key-project/comfyui-dream-project": [ + [ + "Analyze Palette [Dream]", + "Beat Curve [Dream]", + "Big Float Switch [Dream]", + "Big Image Switch [Dream]", + "Big Int Switch [Dream]", + "Big Latent Switch [Dream]", + "Big Palette Switch [Dream]", + "Big Text Switch [Dream]", + "Boolean To Float [Dream]", + "Boolean To Int [Dream]", + "Build Prompt [Dream]", + "CSV Curve [Dream]", + "CSV Generator [Dream]", + "Calculation [Dream]", + "Common Frame Dimensions [Dream]", + "Compare Palettes [Dream]", + "FFMPEG Video Encoder [Dream]", + "File Count [Dream]", + "Finalize Prompt [Dream]", + "Float Input [Dream]", + "Float to Log Entry [Dream]", + "Frame Count Calculator [Dream]", + "Frame Counter (Directory) [Dream]", + "Frame Counter (Simple) [Dream]", + "Frame Counter Info [Dream]", + "Frame Counter Offset [Dream]", + "Frame Counter Time Offset [Dream]", + "Image Brightness Adjustment [Dream]", + "Image Color Shift [Dream]", + "Image Contrast Adjustment [Dream]", + "Image Motion [Dream]", + "Image Sequence Blend [Dream]", + "Image Sequence Loader [Dream]", + "Image Sequence Saver [Dream]", + "Image Sequence Tweening [Dream]", + "Int Input [Dream]", + "Int to Log Entry [Dream]", + "Laboratory [Dream]", + "Linear Curve [Dream]", + "Log Entry Joiner [Dream]", + "Log File [Dream]", + "Noise from Area Palettes [Dream]", + "Noise from Palette [Dream]", + "Palette Color Align [Dream]", + "Palette Color Shift [Dream]", + "Sample Image Area as Palette [Dream]", + 
"Sample Image as Palette [Dream]", + "Saw Curve [Dream]", + "Sine Curve [Dream]", + "Smooth Event Curve [Dream]", + "String Input [Dream]", + "String Tokenizer [Dream]", + "String to Log Entry [Dream]", + "Text Input [Dream]", + "Triangle Curve [Dream]", + "Triangle Event Curve [Dream]", + "WAV Curve [Dream]" + ], + { + "title_aux": "Dream Project Animation Nodes" + } + ], + "https://github.com/alt-key-project/comfyui-dream-video-batches": [ + [ + "Blended Transition [DVB]", + "Calculation [DVB]", + "Create Frame Set [DVB]", + "Divide [DVB]", + "Fade From Black [DVB]", + "Fade To Black [DVB]", + "Float Input [DVB]", + "For Each Done [DVB]", + "For Each Filename [DVB]", + "Frame Set Append [DVB]", + "Frame Set Frame Dimensions Scaled [DVB]", + "Frame Set Index Offset [DVB]", + "Frame Set Merger [DVB]", + "Frame Set Reindex [DVB]", + "Frame Set Repeat [DVB]", + "Frame Set Reverse [DVB]", + "Frame Set Split Beginning [DVB]", + "Frame Set Split End [DVB]", + "Frame Set Splitter [DVB]", + "Generate Inbetween Frames [DVB]", + "Int Input [DVB]", + "Linear Camera Pan [DVB]", + "Linear Camera Roll [DVB]", + "Linear Camera Zoom [DVB]", + "Load Image From Path [DVB]", + "Multiply [DVB]", + "Sine Camera Pan [DVB]", + "Sine Camera Roll [DVB]", + "Sine Camera Zoom [DVB]", + "String Input [DVB]", + "Text Input [DVB]", + "Trace Memory Allocation [DVB]", + "Unwrap Frame Set [DVB]" + ], + { + "title_aux": "Dream Video Batches" + } + ], + "https://github.com/an90ray/ComfyUI_RErouter_CustomNodes": [ + [ + "CLIPTextEncode (RE)", + "CLIPTextEncodeSDXL (RE)", + "CLIPTextEncodeSDXLRefiner (RE)", + "Int (RE)", + "RErouter <=", + "RErouter =>", + "String (RE)" + ], + { + "title_aux": "ComfyUI_RErouter_CustomNodes" + } + ], + "https://github.com/andersxa/comfyui-PromptAttention": [ + [ + "CLIPAttentionMaskEncode" + ], + { + "title_aux": "CLIP Directional Prompt Attention" + } + ], + "https://github.com/antrobot1234/antrobots-comfyUI-nodepack": [ + [ + "composite", + "crop", + "paste", + "preview_mask", + "scale" + ], + { + "title_aux": "antrobots ComfyUI Nodepack" + } + ], + "https://github.com/asagi4/ComfyUI-CADS": [ + [ + "CADS" + ], + { + "title_aux": "ComfyUI-CADS" + } + ], + "https://github.com/asagi4/comfyui-prompt-control": [ + [ + "EditableCLIPEncode", + "FilterSchedule", + "LoRAScheduler", + "PCApplySettings", + "PCPromptFromSchedule", + "PCScheduleSettings", + "PCSplitSampling", + "PromptControlSimple", + "PromptToSchedule", + "ScheduleToCond", + "ScheduleToModel" + ], + { + "title_aux": "ComfyUI prompt control" + } + ], + "https://github.com/asagi4/comfyui-utility-nodes": [ + [ + "MUForceCacheClear", + "MUJinjaRender", + "MUSimpleWildcard" + ], + { + "title_aux": "asagi4/comfyui-utility-nodes" + } + ], + "https://github.com/aszc-dev/ComfyUI-CoreMLSuite": [ + [ + "Core ML Converter", + "Core ML LCM Converter", + "Core ML LoRA Loader", + "CoreMLModelAdapter", + "CoreMLSampler", + "CoreMLSamplerAdvanced", + "CoreMLUNetLoader" + ], + { + "title_aux": "Core ML Suite for ComfyUI" + } + ], + "https://github.com/avatechai/avatar-graph-comfyui": [ + [ + "ApplyMeshTransformAsShapeKey", + "B_ENUM", + "B_VECTOR3", + "B_VECTOR4", + "Combine Points", + "CreateShapeFlow", + "ExportBlendshapes", + "ExportGLTF", + "Extract Boundary Points", + "Image Alpha Mask Merge", + "ImageBridge", + "LoadImageFromRequest", + "LoadImageWithAlpha", + "LoadValueFromRequest", + "SAM MultiLayer", + "Save Image With Workflow" + ], + { + "author": "Avatech Limited", + "description": "Include nodes for sam + bpy operation, that allows 
workflow creations for generative 2d character rig.", + "nickname": "Avatar Graph", + "title": "Avatar Graph", + "title_aux": "avatar-graph-comfyui" + } + ], + "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes": [ + [ + "HaojihuiClipScoreFakeImageProcessor", + "HaojihuiClipScoreImageProcessor", + "HaojihuiClipScoreImageScore", + "HaojihuiClipScoreLoader", + "HaojihuiClipScoreRealImageProcessor", + "HaojihuiClipScoreTextProcessor" + ], + { + "title_aux": "ComfyUI-ClipScore-Nodes" + } + ], + "https://github.com/badjeff/comfyui_lora_tag_loader": [ + [ + "LoraTagLoader" + ], + { + "title_aux": "LoRA Tag Loader for ComfyUI" + } + ], + "https://github.com/banodoco/steerable-motion": [ + [ + "BatchCreativeInterpolation" + ], + { + "title_aux": "Steerable Motion" + } + ], + "https://github.com/bash-j/mikey_nodes": [ + [ + "AddMetaData", + "Batch Crop Image", + "Batch Crop Resize Inplace", + "Batch Load Images", + "Batch Resize Image for SDXL", + "Checkpoint Loader Simple Mikey", + "CinematicLook", + "Empty Latent Ratio Custom SDXL", + "Empty Latent Ratio Select SDXL", + "EvalFloats", + "FaceFixerOpenCV", + "FileNamePrefix", + "FileNamePrefixDateDirFirst", + "Float to String", + "HaldCLUT", + "Image Caption", + "ImageBorder", + "ImageOverlay", + "ImagePaste", + "Int to String", + "LMStudioPrompt", + "Load Image Based on Number", + "LoraSyntaxProcessor", + "Mikey Sampler", + "Mikey Sampler Base Only", + "Mikey Sampler Base Only Advanced", + "Mikey Sampler Tiled", + "Mikey Sampler Tiled Base Only", + "MikeySamplerTiledAdvanced", + "MikeySamplerTiledAdvancedBaseOnly", + "OobaPrompt", + "PresetRatioSelector", + "Prompt With SDXL", + "Prompt With Style", + "Prompt With Style V2", + "Prompt With Style V3", + "Range Float", + "Range Integer", + "Ratio Advanced", + "Resize Image for SDXL", + "Save Image If True", + "Save Image With Prompt Data", + "Save Images Mikey", + "Save Images No Display", + "SaveMetaData", + "SearchAndReplace", + "Seed String", + "Style Conditioner", + "Style Conditioner Base Only", + "Text2InputOr3rdOption", + "TextCombinations", + "TextCombinations3", + "TextConcat", + "TextPreserve", + "Upscale Tile Calculator", + "Wildcard Processor", + "WildcardAndLoraSyntaxProcessor", + "WildcardOobaPrompt" + ], + { + "title_aux": "Mikey Nodes" + } + ], + "https://github.com/bedovyy/ComfyUI_NAIDGenerator": [ + [ + "GenerateNAID", + "Img2ImgOptionNAID", + "InpaintingOptionNAID", + "MaskImageToNAID", + "ModelOptionNAID", + "PromptToNAID" + ], + { + "title_aux": "ComfyUI_NAIDGenerator" + } + ], + "https://github.com/biegert/ComfyUI-CLIPSeg/raw/main/custom_nodes/clipseg.py": [ + [ + "CLIPSeg", + "CombineSegMasks" + ], + { + "title_aux": "CLIPSeg" + } + ], + "https://github.com/bilal-arikan/ComfyUI_TextAssets": [ + [ + "LoadTextAsset" + ], + { + "title_aux": "ComfyUI_TextAssets" + } + ], + "https://github.com/blepping/ComfyUI-bleh": [ + [ + "BlehDeepShrink", + "BlehDiscardPenultimateSigma", + "BlehForceSeedSampler", + "BlehHyperTile", + "BlehInsaneChainSampler", + "BlehModelPatchConditional" + ], + { + "title_aux": "ComfyUI-bleh" + } + ], + "https://github.com/blepping/ComfyUI-sonar": [ + [ + "NoisyLatentLike", + "SamplerSonarDPMPPSDE", + "SamplerSonarEuler", + "SamplerSonarEulerA", + "SonarCustomNoise", + "SonarGuidanceConfig" + ], + { + "title_aux": "ComfyUI-sonar" + } + ], + "https://github.com/bmad4ever/comfyui_ab_samplercustom": [ + [ + "AB SamplerCustom (experimental)" + ], + { + "title_aux": "comfyui_ab_sampler" + } + ], + "https://github.com/bmad4ever/comfyui_bmad_nodes": [ + [ + 
"AdaptiveThresholding", + "Add String To Many", + "AddAlpha", + "AdjustRect", + "AnyToAny", + "BoundingRect (contours)", + "BuildColorRangeAdvanced (hsv)", + "BuildColorRangeHSV (hsv)", + "CLAHE", + "CLIPEncodeMultiple", + "CLIPEncodeMultipleAdvanced", + "ChameleonMask", + "CheckpointLoader (dirty)", + "CheckpointLoaderSimple (dirty)", + "Color (RGB)", + "Color (hexadecimal)", + "Color Clip", + "Color Clip (advanced)", + "Color Clip ADE20k", + "ColorDictionary", + "ColorDictionary (custom)", + "Conditioning (combine multiple)", + "Conditioning (combine selective)", + "Conditioning Grid (cond)", + "Conditioning Grid (string)", + "Conditioning Grid (string) Advanced", + "Contour To Mask", + "Contours", + "ControlNetHadamard", + "ControlNetHadamard (manual)", + "ConvertImg", + "CopyMakeBorder", + "CreateRequestMetadata", + "DistanceTransform", + "Draw Contour(s)", + "EqualizeHistogram", + "ExtendColorList", + "ExtendCondList", + "ExtendFloatList", + "ExtendImageList", + "ExtendIntList", + "ExtendLatentList", + "ExtendMaskList", + "ExtendModelList", + "ExtendStringList", + "FadeMaskEdges", + "Filter Contour", + "FindComplementaryColor", + "FindThreshold", + "FlatLatentsIntoSingleGrid", + "Framed Mask Grab Cut", + "Framed Mask Grab Cut 2", + "FromListGet1Color", + "FromListGet1Cond", + "FromListGet1Float", + "FromListGet1Image", + "FromListGet1Int", + "FromListGet1Latent", + "FromListGet1Mask", + "FromListGet1Model", + "FromListGet1String", + "FromListGetColors", + "FromListGetConds", + "FromListGetFloats", + "FromListGetImages", + "FromListGetInts", + "FromListGetLatents", + "FromListGetMasks", + "FromListGetModels", + "FromListGetStrings", + "Get Contour from list", + "Get Models", + "Get Prompt", + "HypernetworkLoader (dirty)", + "ImageBatchToList", + "InRange (hsv)", + "Inpaint", + "Input/String to Int Array", + "KMeansColor", + "Load 64 Encoded Image", + "LoraLoader (dirty)", + "MaskGrid N KSamplers Advanced", + "MaskOuterBlur", + "Merge Latent Batch Gridwise", + "MonoMerge", + "MorphologicOperation", + "MorphologicSkeletoning", + "NaiveAutoKMeansColor", + "OtsuThreshold", + "RGB to HSV", + "Rect Grab Cut", + "Remap", + "RemapBarrelDistortion", + "RemapFromInsideParabolas", + "RemapFromQuadrilateral (homography)", + "RemapInsideParabolas", + "RemapInsideParabolasAdvanced", + "RemapPinch", + "RemapReverseBarrelDistortion", + "RemapStretch", + "RemapToInnerCylinder", + "RemapToOuterCylinder", + "RemapToQuadrilateral", + "RemapWarpPolar", + "Repeat Into Grid (image)", + "Repeat Into Grid (latent)", + "RequestInputs", + "SampleColorHSV", + "Save Image (api)", + "SeamlessClone", + "SeamlessClone (simple)", + "SetRequestStateToComplete", + "String", + "String to Float", + "String to Integer", + "ToColorList", + "ToCondList", + "ToFloatList", + "ToImageList", + "ToIntList", + "ToLatentList", + "ToMaskList", + "ToModelList", + "ToStringList", + "UnGridify (image)", + "VAEEncodeBatch" + ], + { + "title_aux": "Bmad Nodes" + } + ], + "https://github.com/bmad4ever/comfyui_lists_cartesian_product": [ + [ + "AnyListCartesianProduct" + ], + { + "title_aux": "Lists Cartesian Product" + } + ], + "https://github.com/bradsec/ComfyUI_ResolutionSelector": [ + [ + "ResolutionSelector" + ], + { + "title_aux": "ResolutionSelector for ComfyUI" + } + ], + "https://github.com/braintacles/braintacles-comfyui-nodes": [ + [ + "CLIPTextEncodeSDXL-Multi-IO", + "CLIPTextEncodeSDXL-Pipe", + "Empty Latent Image from Aspect-Ratio", + "Random Find and Replace", + "VAE Decode Pipe", + "VAE Decode Tiled Pipe", + "VAE Encode 
Pipe", + "VAE Encode Tiled Pipe" + ], + { + "title_aux": "braintacles-nodes" + } + ], + "https://github.com/brianfitzgerald/style_aligned_comfy": [ + [ + "StyleAlignedBatchAlign", + "StyleAlignedReferenceSampler", + "StyleAlignedSampleReferenceLatents" + ], + { + "title_aux": "StyleAligned for ComfyUI" + } + ], + "https://github.com/bronkula/comfyui-fitsize": [ + [ + "FS: Crop Image Into Even Pieces", + "FS: Fit Image And Resize", + "FS: Fit Size From Image", + "FS: Fit Size From Int", + "FS: Image Region To Mask", + "FS: Load Image And Resize To Fit", + "FS: Pick Image From Batch", + "FS: Pick Image From Batches", + "FS: Pick Image From List" + ], + { + "title_aux": "comfyui-fitsize" + } + ], + "https://github.com/bruefire/ComfyUI-SeqImageLoader": [ + [ + "VFrame Loader With Mask Editor", + "Video Loader With Mask Editor" + ], + { + "title_aux": "ComfyUI Sequential Image Loader" + } + ], + "https://github.com/budihartono/comfyui_otonx_nodes": [ + [ + "OTX Integer Multiple Inputs 4", + "OTX Integer Multiple Inputs 5", + "OTX Integer Multiple Inputs 6", + "OTX KSampler Feeder", + "OTX Versatile Multiple Inputs 4", + "OTX Versatile Multiple Inputs 5", + "OTX Versatile Multiple Inputs 6" + ], + { + "title_aux": "Otonx's Custom Nodes" + } + ], + "https://github.com/bvhari/ComfyUI_ImageProcessing": [ + [ + "BilateralFilter", + "Brightness", + "Gamma", + "Hue", + "Saturation", + "SigmoidCorrection", + "UnsharpMask" + ], + { + "title_aux": "ImageProcessing" + } + ], + "https://github.com/bvhari/ComfyUI_LatentToRGB": [ + [ + "LatentToRGB" + ], + { + "title_aux": "LatentToRGB" + } + ], + "https://github.com/bvhari/ComfyUI_PerpWeight": [ + [ + "CLIPTextEncodePerpWeight" + ], + { + "title_aux": "ComfyUI_PerpWeight" + } + ], + "https://github.com/catscandrive/comfyui-imagesubfolders/raw/main/loadImageWithSubfolders.py": [ + [ + "LoadImagewithSubfolders" + ], + { + "title_aux": "Image loader with subfolders" + } + ], + "https://github.com/celsojr2013/comfyui_simpletools/raw/main/google_translator.py": [ + [ + "GoogleTranslator" + ], + { + "title_aux": "ComfyUI SimpleTools Suit" + } + ], + "https://github.com/ceruleandeep/ComfyUI-LLaVA-Captioner": [ + [ + "LlavaCaptioner" + ], + { + "title_aux": "ComfyUI LLaVA Captioner" + } + ], + "https://github.com/chaojie/ComfyUI-DragNUWA": [ + [ + "BrushMotion", + "CompositeMotionBrush", + "CompositeMotionBrushWithoutModel", + "DragNUWA Run", + "DragNUWA Run MotionBrush", + "Get First Image", + "Get Last Image", + "InstantCameraMotionBrush", + "InstantObjectMotionBrush", + "Load CheckPoint DragNUWA", + "Load MotionBrush From Optical Flow", + "Load MotionBrush From Optical Flow Directory", + "Load MotionBrush From Optical Flow Without Model", + "Load MotionBrush From Tracking Points", + "Load MotionBrush From Tracking Points Without Model", + "Load Pose KeyPoints", + "Loop", + "LoopEnd_IMAGE", + "LoopStart_IMAGE", + "Split Tracking Points" + ], + { + "title_aux": "ComfyUI-DragNUWA" + } + ], + "https://github.com/chaojie/ComfyUI-DynamiCrafter": [ + [ + "DynamiCrafter Simple", + "DynamiCrafterLoader" + ], + { + "title_aux": "ComfyUI-DynamiCrafter" + } + ], + "https://github.com/chaojie/ComfyUI-I2VGEN-XL": [ + [ + "I2VGEN-XL Simple", + "Modelscope Pipeline Loader" + ], + { + "title_aux": "ComfyUI-I2VGEN-XL" + } + ], + "https://github.com/chaojie/ComfyUI-LightGlue": [ + [ + "LightGlue Loader", + "LightGlue Simple", + "LightGlue Simple Multi" + ], + { + "title_aux": "ComfyUI-LightGlue" + } + ], + "https://github.com/chaojie/ComfyUI-Moore-AnimateAnyone": [ + [ + 
"Moore-AnimateAnyone Denoising Unet", + "Moore-AnimateAnyone Image Encoder", + "Moore-AnimateAnyone Pipeline Loader", + "Moore-AnimateAnyone Pose Guider", + "Moore-AnimateAnyone Reference Unet", + "Moore-AnimateAnyone Simple", + "Moore-AnimateAnyone VAE" + ], + { + "title_aux": "ComfyUI-Moore-AnimateAnyone" + } + ], + "https://github.com/chaojie/ComfyUI-Motion-Vector-Extractor": [ + [ + "Motion Vector Extractor", + "VideoCombineThenPath" + ], + { + "title_aux": "ComfyUI-Motion-Vector-Extractor" + } + ], + "https://github.com/chaojie/ComfyUI-MotionCtrl": [ + [ + "Load Motion Camera Preset", + "Load Motion Traj Preset", + "Load Motionctrl Checkpoint", + "Motionctrl Cond", + "Motionctrl Sample", + "Motionctrl Sample Simple", + "Select Image Indices" + ], + { + "title_aux": "ComfyUI-MotionCtrl" + } + ], + "https://github.com/chaojie/ComfyUI-MotionCtrl-SVD": [ + [ + "Load Motionctrl-SVD Camera Preset", + "Load Motionctrl-SVD Checkpoint", + "Motionctrl-SVD Sample Simple" + ], + { + "title_aux": "ComfyUI-MotionCtrl-SVD" + } + ], + "https://github.com/chaojie/ComfyUI-Panda3d": [ + [ + "Panda3dAmbientLight", + "Panda3dAttachNewNode", + "Panda3dBase", + "Panda3dDirectionalLight", + "Panda3dLoadDepthModel", + "Panda3dLoadModel", + "Panda3dLoadTexture", + "Panda3dModelMerge", + "Panda3dTest", + "Panda3dTextureMerge" + ], + { + "title_aux": "ComfyUI-Panda3d" + } + ], + "https://github.com/chaojie/ComfyUI-Pymunk": [ + [ + "PygameRun", + "PygameSurface", + "PymunkDynamicBox", + "PymunkDynamicCircle", + "PymunkRun", + "PymunkShapeMerge", + "PymunkSpace", + "PymunkStaticLine" + ], + { + "title_aux": "ComfyUI-Pymunk" + } + ], + "https://github.com/chaojie/ComfyUI-RAFT": [ + [ + "Load MotionBrush", + "RAFT Run", + "Save MotionBrush", + "VizMotionBrush" + ], + { + "title_aux": "ComfyUI-RAFT" + } + ], + "https://github.com/chflame163/ComfyUI_LayerStyle": [ + [ + "LayerColor: Brightness & Contrast", + "LayerColor: ColorAdapter", + "LayerColor: Exposure", + "LayerColor: Gamma", + "LayerColor: HSV", + "LayerColor: LAB", + "LayerColor: LUT Apply", + "LayerColor: RGB", + "LayerColor: YUV", + "LayerFilter: ChannelShake", + "LayerFilter: ColorMap", + "LayerFilter: GaussianBlur", + "LayerFilter: MotionBlur", + "LayerFilter: Sharp & Soft", + "LayerFilter: SkinBeauty", + "LayerFilter: SoftLight", + "LayerFilter: WaterColor", + "LayerMask: CreateGradientMask", + "LayerMask: MaskBoxDetect", + "LayerMask: MaskByDifferent", + "LayerMask: MaskEdgeShrink", + "LayerMask: MaskEdgeUltraDetail", + "LayerMask: MaskGradient", + "LayerMask: MaskGrow", + "LayerMask: MaskInvert", + "LayerMask: MaskMotionBlur", + "LayerMask: MaskPreview", + "LayerMask: MaskStroke", + "LayerMask: PixelSpread", + "LayerMask: RemBgUltra", + "LayerMask: SegmentAnythingUltra", + "LayerStyle: ColorOverlay", + "LayerStyle: DropShadow", + "LayerStyle: GradientOverlay", + "LayerStyle: InnerGlow", + "LayerStyle: InnerShadow", + "LayerStyle: OuterGlow", + "LayerStyle: Stroke", + "LayerUtility: ColorImage", + "LayerUtility: ColorPicker", + "LayerUtility: CropByMask", + "LayerUtility: ExtendCanvas", + "LayerUtility: GetColorTone", + "LayerUtility: GetImageSize", + "LayerUtility: GradientImage", + "LayerUtility: ImageBlend", + "LayerUtility: ImageBlendAdvance", + "LayerUtility: ImageChannelMerge", + "LayerUtility: ImageChannelSplit", + "LayerUtility: ImageMaskScaleAs", + "LayerUtility: ImageOpacity", + "LayerUtility: ImageScaleByAspectRatio", + "LayerUtility: ImageScaleRestore", + "LayerUtility: ImageShift", + "LayerUtility: LayerImageTransform", + "LayerUtility: 
LayerMaskTransform", + "LayerUtility: PrintInfo", + "LayerUtility: RestoreCropBox", + "LayerUtility: TextImage", + "LayerUtility: XY to Percent" + ], + { + "title_aux": "ComfyUI Layer Style" + } + ], + "https://github.com/chflame163/ComfyUI_MSSpeech_TTS": [ + [ + "Input Trigger", + "MicrosoftSpeech_TTS", + "Play Sound", + "Play Sound (loop)" + ], + { + "title_aux": "ComfyUI_MSSpeech_TTS" + } + ], + "https://github.com/chflame163/ComfyUI_WordCloud": [ + [ + "ComfyWordCloud", + "LoadTextFile", + "RGB_Picker" + ], + { + "title_aux": "ComfyUI_WordCloud" + } + ], + "https://github.com/chibiace/ComfyUI-Chibi-Nodes": [ + [ + "ConditionText", + "ConditionTextMulti", + "ImageAddText", + "ImageSimpleResize", + "ImageSizeInfo", + "ImageTool", + "Int2String", + "LoadEmbedding", + "LoadImageExtended", + "Loader", + "Prompts", + "RandomResolutionLatent", + "SaveImages", + "SeedGenerator", + "SimpleSampler", + "TextSplit", + "Textbox", + "Wildcards" + ], + { + "title_aux": "ComfyUI-Chibi-Nodes" + } + ], + "https://github.com/chrisgoringe/cg-image-picker": [ + [ + "Preview Chooser", + "Preview Chooser Fabric" + ], + { + "author": "chrisgoringe", + "description": "Custom nodes that preview images and pause the workflow to allow the user to select one or more to progress", + "nickname": "Image Chooser", + "title": "Image Chooser", + "title_aux": "Image chooser" + } + ], + "https://github.com/chrisgoringe/cg-noise": [ + [ + "Hijack", + "KSampler Advanced with Variations", + "KSampler with Variations", + "UnHijack" + ], + { + "title_aux": "Variation seeds" + } + ], + "https://github.com/chrisgoringe/cg-use-everywhere": [ + [ + "Seed Everywhere" + ], + { + "nodename_pattern": "(^(Prompts|Anything) Everywhere|Simple String)", + "title_aux": "Use Everywhere (UE Nodes)" + } + ], + "https://github.com/city96/ComfyUI_ColorMod": [ + [ + "ColorModEdges", + "ColorModPivot", + "LoadImageHighPrec", + "PreviewImageHighPrec", + "SaveImageHighPrec" + ], + { + "title_aux": "ComfyUI_ColorMod" + } + ], + "https://github.com/city96/ComfyUI_DiT": [ + [ + "DiTCheckpointLoader", + "DiTCheckpointLoaderSimple", + "DiTLabelCombine", + "DiTLabelSelect", + "DiTSampler" + ], + { + "title_aux": "ComfyUI_DiT [WIP]" + } + ], + "https://github.com/city96/ComfyUI_ExtraModels": [ + [ + "DiTCondLabelEmpty", + "DiTCondLabelSelect", + "DitCheckpointLoader", + "ExtraVAELoader", + "PixArtCheckpointLoader", + "PixArtDPMSampler", + "PixArtLoraLoader", + "PixArtResolutionSelect", + "PixArtT5TextEncode", + "T5TextEncode", + "T5v11Loader" + ], + { + "title_aux": "Extra Models for ComfyUI" + } + ], + "https://github.com/city96/ComfyUI_NetDist": [ + [ + "CombineImageBatch", + "FetchRemote", + "LoadCurrentWorkflowJSON", + "LoadDiskWorkflowJSON", + "LoadImageUrl", + "LoadLatentNumpy", + "LoadLatentUrl", + "RemoteChainEnd", + "RemoteChainStart", + "RemoteQueueSimple", + "RemoteQueueWorker", + "SaveDiskWorkflowJSON", + "SaveImageUrl", + "SaveLatentNumpy" + ], + { + "title_aux": "ComfyUI_NetDist" + } + ], + "https://github.com/city96/SD-Advanced-Noise": [ + [ + "LatentGaussianNoise", + "MathEncode" + ], + { + "title_aux": "SD-Advanced-Noise" + } + ], + "https://github.com/city96/SD-Latent-Interposer": [ + [ + "LatentInterposer" + ], + { + "title_aux": "Latent-Interposer" + } + ], + "https://github.com/city96/SD-Latent-Upscaler": [ + [ + "LatentUpscaler" + ], + { + "title_aux": "SD-Latent-Upscaler" + } + ], + "https://github.com/civitai/comfy-nodes": [ + [ + "CivitAI_Checkpoint_Loader", + "CivitAI_Lora_Loader" + ], + { + "title_aux": "comfy-nodes" + } + ], + 
"https://github.com/comfyanonymous/ComfyUI": [ + [ + "BasicScheduler", + "CLIPLoader", + "CLIPMergeSimple", + "CLIPSave", + "CLIPSetLastLayer", + "CLIPTextEncode", + "CLIPTextEncodeControlnet", + "CLIPTextEncodeSDXL", + "CLIPTextEncodeSDXLRefiner", + "CLIPVisionEncode", + "CLIPVisionLoader", + "Canny", + "CheckpointLoader", + "CheckpointLoaderSimple", + "CheckpointSave", + "ConditioningAverage", + "ConditioningCombine", + "ConditioningConcat", + "ConditioningSetArea", + "ConditioningSetAreaPercentage", + "ConditioningSetAreaStrength", + "ConditioningSetMask", + "ConditioningSetTimestepRange", + "ConditioningZeroOut", + "ControlNetApply", + "ControlNetApplyAdvanced", + "ControlNetLoader", + "CropMask", + "DiffControlNetLoader", + "DiffusersLoader", + "DualCLIPLoader", + "EmptyImage", + "EmptyLatentImage", + "ExponentialScheduler", + "FeatherMask", + "FlipSigmas", + "FreeU", + "FreeU_V2", + "GLIGENLoader", + "GLIGENTextBoxApply", + "GrowMask", + "HyperTile", + "HypernetworkLoader", + "ImageBatch", + "ImageBlend", + "ImageBlur", + "ImageColorToMask", + "ImageCompositeMasked", + "ImageCrop", + "ImageFromBatch", + "ImageInvert", + "ImageOnlyCheckpointLoader", + "ImageOnlyCheckpointSave", + "ImagePadForOutpaint", + "ImageQuantize", + "ImageScale", + "ImageScaleBy", + "ImageScaleToTotalPixels", + "ImageSharpen", + "ImageToMask", + "ImageUpscaleWithModel", + "InpaintModelConditioning", + "InvertMask", + "JoinImageWithAlpha", + "KSampler", + "KSamplerAdvanced", + "KSamplerSelect", + "KarrasScheduler", + "LatentAdd", + "LatentBatch", + "LatentBatchSeedBehavior", + "LatentBlend", + "LatentComposite", + "LatentCompositeMasked", + "LatentCrop", + "LatentFlip", + "LatentFromBatch", + "LatentInterpolate", + "LatentMultiply", + "LatentRotate", + "LatentSubtract", + "LatentUpscale", + "LatentUpscaleBy", + "LoadImage", + "LoadImageMask", + "LoadLatent", + "LoraLoader", + "LoraLoaderModelOnly", + "MaskComposite", + "MaskToImage", + "ModelMergeAdd", + "ModelMergeBlocks", + "ModelMergeSimple", + "ModelMergeSubtract", + "ModelSamplingContinuousEDM", + "ModelSamplingDiscrete", + "ModelSamplingStableCascade", + "PatchModelAddDownscale", + "PerpNeg", + "PhotoMakerEncode", + "PhotoMakerLoader", + "PolyexponentialScheduler", + "PorterDuffImageComposite", + "PreviewImage", + "RebatchImages", + "RebatchLatents", + "RepeatImageBatch", + "RepeatLatentBatch", + "RescaleCFG", + "SDTurboScheduler", + "SD_4XUpscale_Conditioning", + "SVD_img2vid_Conditioning", + "SamplerCustom", + "SamplerDPMPP_2M_SDE", + "SamplerDPMPP_SDE", + "SaveAnimatedPNG", + "SaveAnimatedWEBP", + "SaveImage", + "SaveLatent", + "SelfAttentionGuidance", + "SetLatentNoiseMask", + "SolidMask", + "SplitImageWithAlpha", + "SplitSigmas", + "StableCascade_EmptyLatentImage", + "StableCascade_StageB_Conditioning", + "StableCascade_StageC_VAEEncode", + "StableZero123_Conditioning", + "StableZero123_Conditioning_Batched", + "StyleModelApply", + "StyleModelLoader", + "TomePatchModel", + "UNETLoader", + "UpscaleModelLoader", + "VAEDecode", + "VAEDecodeTiled", + "VAEEncode", + "VAEEncodeForInpaint", + "VAEEncodeTiled", + "VAELoader", + "VAESave", + "VPScheduler", + "VideoLinearCFGGuidance", + "unCLIPCheckpointLoader", + "unCLIPConditioning" + ], + { + "title_aux": "ComfyUI" + } + ], + "https://github.com/comfyanonymous/ComfyUI_experiments": [ + [ + "ModelMergeBlockNumber", + "ModelMergeSDXL", + "ModelMergeSDXLDetailedTransformers", + "ModelMergeSDXLTransformers", + "ModelSamplerTonemapNoiseTest", + "ReferenceOnlySimple", + "RescaleClassifierFreeGuidanceTest", + 
"TonemapNoiseWithRescaleCFG" + ], + { + "title_aux": "ComfyUI_experiments" + } + ], + "https://github.com/concarne000/ConCarneNode": [ + [ + "BingImageGrabber", + "Zephyr" + ], + { + "title_aux": "ConCarneNode" + } + ], + "https://github.com/coreyryanhanson/ComfyQR": [ + [ + "comfy-qr-by-image-size", + "comfy-qr-by-module-size", + "comfy-qr-by-module-split", + "comfy-qr-mask_errors" + ], + { + "title_aux": "ComfyQR" + } + ], + "https://github.com/coreyryanhanson/ComfyQR-scanning-nodes": [ + [ + "comfy-qr-read", + "comfy-qr-validate" + ], + { + "title_aux": "ComfyQR-scanning-nodes" + } + ], + "https://github.com/cubiq/ComfyUI_IPAdapter_plus": [ + [ + "IPAdapterApply", + "IPAdapterApplyEncoded", + "IPAdapterApplyFaceID", + "IPAdapterBatchEmbeds", + "IPAdapterEncoder", + "IPAdapterLoadEmbeds", + "IPAdapterModelLoader", + "IPAdapterSaveEmbeds", + "IPAdapterTilesMasked", + "InsightFaceLoader", + "PrepImageForClipVision", + "PrepImageForInsightFace" + ], + { + "title_aux": "ComfyUI_IPAdapter_plus" + } + ], + "https://github.com/cubiq/ComfyUI_InstantID": [ + [ + "ApplyInstantID", + "ApplyInstantIDAdvanced", + "FaceKeypointsPreprocessor", + "InstantIDFaceAnalysis", + "InstantIDModelLoader" + ], + { + "title_aux": "ComfyUI InstantID (Native Support)" + } + ], + "https://github.com/cubiq/ComfyUI_SimpleMath": [ + [ + "SimpleMath", + "SimpleMathDebug" + ], + { + "title_aux": "Simple Math" + } + ], + "https://github.com/cubiq/ComfyUI_essentials": [ + [ + "BatchCount+", + "CLIPTextEncodeSDXL+", + "ConsoleDebug+", + "DebugTensorShape+", + "DrawText+", + "ExtractKeyframes+", + "GetImageSize+", + "ImageApplyLUT+", + "ImageCASharpening+", + "ImageCompositeFromMaskBatch+", + "ImageCrop+", + "ImageDesaturate+", + "ImageEnhanceDifference+", + "ImageExpandBatch+", + "ImageFlip+", + "ImageFromBatch+", + "ImagePosterize+", + "ImageRemoveBackground+", + "ImageResize+", + "ImageSeamCarving+", + "KSamplerVariationsStochastic+", + "KSamplerVariationsWithNoise+", + "MaskBatch+", + "MaskBlur+", + "MaskExpandBatch+", + "MaskFlip+", + "MaskFromBatch+", + "MaskFromColor+", + "MaskPreview+", + "ModelCompile+", + "NoiseFromImage~", + "RemBGSession+", + "RemoveLatentMask+", + "SDXLEmptyLatentSizePicker+", + "SimpleMath+", + "TransitionMask+" + ], + { + "title_aux": "ComfyUI Essentials" + } + ], + "https://github.com/dagthomas/comfyui_dagthomas": [ + [ + "CSL", + "CSVPromptGenerator", + "PromptGenerator" + ], + { + "title_aux": "SDXL Auto Prompter" + } + ], + "https://github.com/daniel-lewis-ab/ComfyUI-Llama": [ + [ + "Call LLM Advanced", + "Call LLM Basic", + "LLM_Create_Completion Advanced", + "LLM_Detokenize", + "LLM_Embed", + "LLM_Eval", + "LLM_Load_State", + "LLM_Reset", + "LLM_Sample", + "LLM_Save_State", + "LLM_Token_BOS", + "LLM_Token_EOS", + "LLM_Tokenize", + "Load LLM Model Advanced", + "Load LLM Model Basic" + ], + { + "title_aux": "ComfyUI-Llama" + } + ], + "https://github.com/daniel-lewis-ab/ComfyUI-TTS": [ + [ + "Load_Piper_Model", + "Piper_Speak_Text" + ], + { + "title_aux": "ComfyUI-TTS" + } + ], + "https://github.com/darkpixel/darkprompts": [ + [ + "DarkCombine", + "DarkFaceIndexShuffle", + "DarkLoRALoader", + "DarkPrompt" + ], + { + "title_aux": "DarkPrompts" + } + ], + "https://github.com/davask/ComfyUI-MarasIT-Nodes": [ + [ + "MarasitBusNode", + "MarasitBusPipeNode", + "MarasitPipeNodeBasic", + "MarasitUniversalBusNode" + ], + { + "title_aux": "MarasIT Nodes" + } + ], + "https://github.com/dave-palt/comfyui_DSP_imagehelpers": [ + [ + "dsp-imagehelpers-concat" + ], + { + "title_aux": 
"comfyui_DSP_imagehelpers" + } + ], + "https://github.com/dawangraoming/ComfyUI_ksampler_gpu/raw/main/ksampler_gpu.py": [ + [ + "KSamplerAdvancedGPU", + "KSamplerGPU" + ], + { + "title_aux": "KSampler GPU" + } + ], + "https://github.com/daxthin/DZ-FaceDetailer": [ + [ + "DZ_Face_Detailer" + ], + { + "title_aux": "DZ-FaceDetailer" + } + ], + "https://github.com/deroberon/StableZero123-comfyui": [ + [ + "SDZero ImageSplit", + "Stablezero123", + "Stablezero123WithDepth" + ], + { + "title_aux": "StableZero123-comfyui" + } + ], + "https://github.com/deroberon/demofusion-comfyui": [ + [ + "Batch Unsampler", + "Demofusion", + "Demofusion From Single File", + "Iterative Mixing KSampler" + ], + { + "title_aux": "demofusion-comfyui" + } + ], + "https://github.com/dfl/comfyui-clip-with-break": [ + [ + "AdvancedCLIPTextEncodeWithBreak", + "CLIPTextEncodeWithBreak" + ], + { + "author": "dfl", + "description": "CLIP text encoder that does BREAK prompting like A1111", + "nickname": "CLIP with BREAK", + "title": "CLIP with BREAK syntax", + "title_aux": "comfyui-clip-with-break" + } + ], + "https://github.com/digitaljohn/comfyui-propost": [ + [ + "ProPostApplyLUT", + "ProPostDepthMapBlur", + "ProPostFilmGrain", + "ProPostRadialBlur", + "ProPostVignette" + ], + { + "title_aux": "ComfyUI-ProPost" + } + ], + "https://github.com/dimtoneff/ComfyUI-PixelArt-Detector": [ + [ + "PixelArtAddDitherPattern", + "PixelArtDetectorConverter", + "PixelArtDetectorSave", + "PixelArtDetectorToImage", + "PixelArtLoadPalettes" + ], + { + "title_aux": "ComfyUI PixelArt Detector" + } + ], + "https://github.com/diontimmer/ComfyUI-Vextra-Nodes": [ + [ + "Add Text To Image", + "Apply Instagram Filter", + "Create Solid Color", + "Flatten Colors", + "Generate Noise Image", + "GlitchThis Effect", + "Hue Rotation", + "Load Picture Index", + "Pixel Sort", + "Play Sound At Execution", + "Prettify Prompt Using distilgpt2", + "Swap Color Mode" + ], + { + "title_aux": "ComfyUI-Vextra-Nodes" + } + ], + "https://github.com/djbielejeski/a-person-mask-generator": [ + [ + "APersonMaskGenerator" + ], + { + "title_aux": "a-person-mask-generator" + } + ], + "https://github.com/dmarx/ComfyUI-AudioReactive": [ + [ + "OpAbs", + "OpBandpass", + "OpClamp", + "OpHarmonic", + "OpModulo", + "OpNormalize", + "OpNovelty", + "OpPercussive", + "OpPow", + "OpPow2", + "OpPredominant_pulse", + "OpQuantize", + "OpRms", + "OpSmoosh", + "OpSmooth", + "OpSqrt", + "OpStretch", + "OpSustain", + "OpThreshold" + ], + { + "title_aux": "ComfyUI-AudioReactive" + } + ], + "https://github.com/dmarx/ComfyUI-Keyframed": [ + [ + "Example", + "KfAddCurveToPGroup", + "KfAddCurveToPGroupx10", + "KfApplyCurveToCond", + "KfConditioningAdd", + "KfConditioningAddx10", + "KfCurveConstant", + "KfCurveDraw", + "KfCurveFromString", + "KfCurveFromYAML", + "KfCurveInverse", + "KfCurveToAcnLatentKeyframe", + "KfCurvesAdd", + "KfCurvesAddx10", + "KfCurvesDivide", + "KfCurvesMultiply", + "KfCurvesMultiplyx10", + "KfCurvesSubtract", + "KfDebug_Clip", + "KfDebug_Cond", + "KfDebug_Curve", + "KfDebug_Float", + "KfDebug_Image", + "KfDebug_Int", + "KfDebug_Latent", + "KfDebug_Model", + "KfDebug_Passthrough", + "KfDebug_Segs", + "KfDebug_String", + "KfDebug_Vae", + "KfDrawSchedule", + "KfEvaluateCurveAtT", + "KfGetCurveFromPGroup", + "KfGetScheduleConditionAtTime", + "KfGetScheduleConditionSlice", + "KfKeyframedCondition", + "KfKeyframedConditionWithText", + "KfPGroupCurveAdd", + "KfPGroupCurveMultiply", + "KfPGroupDraw", + "KfPGroupProd", + "KfPGroupSum", + "KfSetCurveLabel", + "KfSetKeyframe", + 
"KfSinusoidalAdjustAmplitude", + "KfSinusoidalAdjustFrequency", + "KfSinusoidalAdjustPhase", + "KfSinusoidalAdjustWavelength", + "KfSinusoidalEntangledZeroOneFromFrequencyx2", + "KfSinusoidalEntangledZeroOneFromFrequencyx3", + "KfSinusoidalEntangledZeroOneFromFrequencyx4", + "KfSinusoidalEntangledZeroOneFromFrequencyx5", + "KfSinusoidalEntangledZeroOneFromFrequencyx6", + "KfSinusoidalEntangledZeroOneFromFrequencyx7", + "KfSinusoidalEntangledZeroOneFromFrequencyx8", + "KfSinusoidalEntangledZeroOneFromFrequencyx9", + "KfSinusoidalEntangledZeroOneFromWavelengthx2", + "KfSinusoidalEntangledZeroOneFromWavelengthx3", + "KfSinusoidalEntangledZeroOneFromWavelengthx4", + "KfSinusoidalEntangledZeroOneFromWavelengthx5", + "KfSinusoidalEntangledZeroOneFromWavelengthx6", + "KfSinusoidalEntangledZeroOneFromWavelengthx7", + "KfSinusoidalEntangledZeroOneFromWavelengthx8", + "KfSinusoidalEntangledZeroOneFromWavelengthx9", + "KfSinusoidalGetAmplitude", + "KfSinusoidalGetFrequency", + "KfSinusoidalGetPhase", + "KfSinusoidalGetWavelength", + "KfSinusoidalWithFrequency", + "KfSinusoidalWithWavelength" + ], + { + "title_aux": "ComfyUI-Keyframed" + } + ], + "https://github.com/drago87/ComfyUI_Dragos_Nodes": [ + [ + "file_padding", + "image_info", + "lora_loader", + "vae_loader" + ], + { + "title_aux": "ComfyUI_Dragos_Nodes" + } + ], + "https://github.com/drustan-hawk/primitive-types": [ + [ + "float", + "int", + "string", + "string_multiline" + ], + { + "title_aux": "primitive-types" + } + ], + "https://github.com/ealkanat/comfyui_easy_padding": [ + [ + "comfyui-easy-padding" + ], + { + "title_aux": "ComfyUI Easy Padding" + } + ], + "https://github.com/edenartlab/eden_comfy_pipelines": [ + [ + "CLIP_Interrogator", + "Eden_Bool", + "Eden_Compare", + "Eden_DebugPrint", + "Eden_Float", + "Eden_Int", + "Eden_String", + "Filepicker", + "IMG_blender", + "IMG_padder", + "IMG_scaler", + "IMG_unpadder", + "If ANY execute A else B", + "LatentTypeConversion", + "SaveImageAdvanced", + "VAEDecode_to_folder" + ], + { + "title_aux": "eden_comfy_pipelines" + } + ], + "https://github.com/evanspearman/ComfyMath": [ + [ + "CM_BoolBinaryOperation", + "CM_BoolToInt", + "CM_BoolUnaryOperation", + "CM_BreakoutVec2", + "CM_BreakoutVec3", + "CM_BreakoutVec4", + "CM_ComposeVec2", + "CM_ComposeVec3", + "CM_ComposeVec4", + "CM_FloatBinaryCondition", + "CM_FloatBinaryOperation", + "CM_FloatToInt", + "CM_FloatToNumber", + "CM_FloatUnaryCondition", + "CM_FloatUnaryOperation", + "CM_IntBinaryCondition", + "CM_IntBinaryOperation", + "CM_IntToBool", + "CM_IntToFloat", + "CM_IntToNumber", + "CM_IntUnaryCondition", + "CM_IntUnaryOperation", + "CM_NearestSDXLResolution", + "CM_NumberBinaryCondition", + "CM_NumberBinaryOperation", + "CM_NumberToFloat", + "CM_NumberToInt", + "CM_NumberUnaryCondition", + "CM_NumberUnaryOperation", + "CM_SDXLResolution", + "CM_Vec2BinaryCondition", + "CM_Vec2BinaryOperation", + "CM_Vec2ScalarOperation", + "CM_Vec2ToScalarBinaryOperation", + "CM_Vec2ToScalarUnaryOperation", + "CM_Vec2UnaryCondition", + "CM_Vec2UnaryOperation", + "CM_Vec3BinaryCondition", + "CM_Vec3BinaryOperation", + "CM_Vec3ScalarOperation", + "CM_Vec3ToScalarBinaryOperation", + "CM_Vec3ToScalarUnaryOperation", + "CM_Vec3UnaryCondition", + "CM_Vec3UnaryOperation", + "CM_Vec4BinaryCondition", + "CM_Vec4BinaryOperation", + "CM_Vec4ScalarOperation", + "CM_Vec4ToScalarBinaryOperation", + "CM_Vec4ToScalarUnaryOperation", + "CM_Vec4UnaryCondition", + "CM_Vec4UnaryOperation" + ], + { + "title_aux": "ComfyMath" + } + ], + 
"https://github.com/fearnworks/ComfyUI_FearnworksNodes/raw/main/fw_nodes.py": [ + [ + "Count Files in Directory (FW)", + "Count Tokens (FW)", + "Token Count Ranker(FW)", + "Trim To Tokens (FW)" + ], + { + "title_aux": "Fearnworks Custom Nodes" + } + ], + "https://github.com/fexli/fexli-util-node-comfyui": [ + [ + "FEBCPrompt", + "FEBatchGenStringBCDocker", + "FEColor2Image", + "FEColorOut", + "FEDataInsertor", + "FEDataPacker", + "FEDataUnpacker", + "FEDeepClone", + "FEDictPacker", + "FEDictUnpacker", + "FEEncLoraLoader", + "FEExtraInfoAdd", + "FEGenStringBCDocker", + "FEGenStringGPT", + "FEImageNoiseGenerate", + "FEImagePadForOutpaint", + "FEImagePadForOutpaintByImage", + "FEOperatorIf", + "FEPythonStrOp", + "FERandomBool", + "FERandomLoraSelect", + "FERandomPrompt", + "FERandomizedColor2Image", + "FERandomizedColorOut", + "FERerouteWithName", + "FESaveEncryptImage", + "FETextCombine", + "FETextCombine2Any", + "FETextInput" + ], + { + "title_aux": "fexli-util-node-comfyui" + } + ], + "https://github.com/filipemeneses/comfy_pixelization": [ + [ + "Pixelization" + ], + { + "title_aux": "Pixelization" + } + ], + "https://github.com/filliptm/ComfyUI_Fill-Nodes": [ + [ + "FL_ImageCaptionSaver", + "FL_ImageRandomizer" + ], + { + "title_aux": "ComfyUI_Fill-Nodes" + } + ], + "https://github.com/fitCorder/fcSuite/raw/main/fcSuite.py": [ + [ + "fcFloat", + "fcFloatMatic", + "fcHex", + "fcInteger" + ], + { + "title_aux": "fcSuite" + } + ], + "https://github.com/florestefano1975/comfyui-portrait-master": [ + [ + "PortraitMaster" + ], + { + "title_aux": "comfyui-portrait-master" + } + ], + "https://github.com/florestefano1975/comfyui-prompt-composer": [ + [ + "PromptComposerCustomLists", + "PromptComposerEffect", + "PromptComposerGrouping", + "PromptComposerMerge", + "PromptComposerStyler", + "PromptComposerTextSingle", + "promptComposerTextMultiple" + ], + { + "title_aux": "comfyui-prompt-composer" + } + ], + "https://github.com/flowtyone/ComfyUI-Flowty-LDSR": [ + [ + "LDSRModelLoader", + "LDSRUpscale", + "LDSRUpscaler" + ], + { + "title_aux": "ComfyUI-Flowty-LDSR" + } + ], + "https://github.com/flyingshutter/As_ComfyUI_CustomNodes": [ + [ + "BatchIndex_AS", + "CropImage_AS", + "ImageMixMasked_As", + "ImageToMask_AS", + "Increment_AS", + "Int2Any_AS", + "LatentAdd_AS", + "LatentMixMasked_As", + "LatentMix_AS", + "LatentToImages_AS", + "LoadLatent_AS", + "MapRange_AS", + "MaskToImage_AS", + "Math_AS", + "NoiseImage_AS", + "Number2Float_AS", + "Number2Int_AS", + "Number_AS", + "SaveLatent_AS", + "TextToImage_AS", + "TextWildcardList_AS" + ], + { + "title_aux": "As_ComfyUI_CustomNodes" + } + ], + "https://github.com/foxtrot-roger/comfyui-rf-nodes": [ + [ + "LogBool", + "LogFloat", + "LogInt", + "LogNumber", + "LogString", + "LogVec2", + "LogVec3", + "RF_AtIndexString", + "RF_BoolToString", + "RF_FloatToString", + "RF_IntToString", + "RF_JsonStyleLoader", + "RF_MergeLines", + "RF_NumberToString", + "RF_OptionsString", + "RF_RangeFloat", + "RF_RangeInt", + "RF_RangeNumber", + "RF_SavePromptInfo", + "RF_SplitLines", + "RF_TextConcatenate", + "RF_TextInput", + "RF_TextReplace", + "RF_Timestamp", + "RF_ToString", + "RF_Vec2ToString", + "RF_Vec3ToString", + "TextLine" + ], + { + "title_aux": "RF Nodes" + } + ], + "https://github.com/gemell1/ComfyUI_GMIC": [ + [ + "GmicCliWrapper" + ], + { + "title_aux": "ComfyUI_GMIC" + } + ], + "https://github.com/giriss/comfy-image-saver": [ + [ + "Cfg Literal", + "Checkpoint Selector", + "Int Literal", + "Sampler Selector", + "Save Image w/Metadata", + "Scheduler 
Selector", + "Seed Generator", + "String Literal", + "Width/Height Literal" + ], + { + "title_aux": "Save Image with Generation Metadata" + } + ], + "https://github.com/glibsonoran/Plush-for-ComfyUI": [ + [ + "DalleImage", + "Enhancer", + "ImgTextSwitch", + "Plush-Exif Wrangler", + "mulTextSwitch" + ], + { + "title_aux": "Plush-for-ComfyUI" + } + ], + "https://github.com/glifxyz/ComfyUI-GlifNodes": [ + [ + "GlifConsistencyDecoder", + "GlifPatchConsistencyDecoderTiled", + "SDXLAspectRatio" + ], + { + "title_aux": "ComfyUI-GlifNodes" + } + ], + "https://github.com/glowcone/comfyui-base64-to-image": [ + [ + "LoadImageFromBase64" + ], + { + "title_aux": "Load Image From Base64 URI" + } + ], + "https://github.com/godspede/ComfyUI_Substring": [ + [ + "SubstringTheory" + ], + { + "title_aux": "ComfyUI Substring" + } + ], + "https://github.com/gokayfem/ComfyUI_VLM_nodes": [ + [ + "Internlm", + "Joytag", + "JsonToText", + "KeywordExtraction", + "LLMLoader", + "LLMPromptGenerator", + "LLMSampler", + "LLava Loader Simple", + "LLavaPromptGenerator", + "LLavaSamplerAdvanced", + "LLavaSamplerSimple", + "LlavaClipLoader", + "MoonDream", + "PromptGenerateAPI", + "SimpleText", + "Suggester", + "ViewText" + ], + { + "title_aux": "VLM_nodes" + } + ], + "https://github.com/guoyk93/yk-node-suite-comfyui": [ + [ + "YKImagePadForOutpaint", + "YKMaskToImage" + ], + { + "title_aux": "y.k.'s ComfyUI node suite" + } + ], + "https://github.com/hhhzzyang/Comfyui_Lama": [ + [ + "LamaApply", + "LamaModelLoader", + "YamlConfigLoader" + ], + { + "title_aux": "Comfyui-Lama" + } + ], + "https://github.com/hinablue/ComfyUI_3dPoseEditor": [ + [ + "Hina.PoseEditor3D" + ], + { + "title_aux": "ComfyUI 3D Pose Editor" + } + ], + "https://github.com/hustille/ComfyUI_Fooocus_KSampler": [ + [ + "KSampler With Refiner (Fooocus)" + ], + { + "title_aux": "ComfyUI_Fooocus_KSampler" + } + ], + "https://github.com/hustille/ComfyUI_hus_utils": [ + [ + "3way Prompt Styler", + "Batch State", + "Date Time Format", + "Debug Extra", + "Fetch widget value", + "Text Hash" + ], + { + "title_aux": "hus' utils for ComfyUI" + } + ], + "https://github.com/hylarucoder/ComfyUI-Eagle-PNGInfo": [ + [ + "EagleImageNode", + "SDXLPromptStyler", + "SDXLPromptStylerAdvanced", + "SDXLResolutionPresets" + ], + { + "title_aux": "Eagle PNGInfo" + } + ], + "https://github.com/idrirap/ComfyUI-Lora-Auto-Trigger-Words": [ + [ + "FusionText", + "LoraListNames", + "LoraLoaderAdvanced", + "LoraLoaderStackedAdvanced", + "LoraLoaderStackedVanilla", + "LoraLoaderVanilla", + "LoraTagsOnly", + "Randomizer", + "TagsFormater", + "TagsSelector", + "TextInputBasic" + ], + { + "title_aux": "ComfyUI-Lora-Auto-Trigger-Words" + } + ], + "https://github.com/imb101/ComfyUI-FaceSwap": [ + [ + "FaceSwapNode" + ], + { + "title_aux": "FaceSwap" + } + ], + "https://github.com/jags111/ComfyUI_Jags_Audiotools": [ + [ + "BatchJoinAudio", + "BatchToList", + "BitCrushAudioFX", + "BulkVariation", + "ChorusAudioFX", + "ClippingAudioFX", + "CompressorAudioFX", + "ConcatAudioList", + "ConvolutionAudioFX", + "CutAudio", + "DelayAudioFX", + "DistortionAudioFX", + "DuplicateAudio", + "GainAudioFX", + "GenerateAudioSample", + "GenerateAudioWave", + "GetAudioFromFolderIndex", + "GetSingle", + "GetStringByIndex", + "HighShelfFilter", + "HighpassFilter", + "ImageToSpectral", + "InvertAudioFX", + "JoinAudio", + "LadderFilter", + "LimiterAudioFX", + "ListToBatch", + "LoadAudioDir", + "LoadAudioFile", + "LoadAudioModel (DD)", + "LoadVST3", + "LowShelfFilter", + "LowpassFilter", + "MP3CompressorAudioFX", + 
"MixAudioTensors", + "NoiseGateAudioFX", + "OTTAudioFX", + "PeakFilter", + "PhaserEffectAudioFX", + "PitchShiftAudioFX", + "PlotSpectrogram", + "PreviewAudioFile", + "PreviewAudioTensor", + "ResampleAudio", + "ReverbAudioFX", + "ReverseAudio", + "SaveAudioTensor", + "SequenceVariation", + "SliceAudio", + "SoundPlayer", + "StretchAudio", + "samplerate" + ], + { + "author": "jags111", + "description": "This extension offers various audio generation tools", + "nickname": "Audiotools", + "title": "Jags_Audiotools", + "title_aux": "ComfyUI_Jags_Audiotools" + } + ], + "https://github.com/jags111/ComfyUI_Jags_VectorMagic": [ + [ + "CircularVAEDecode", + "JagsCLIPSeg", + "JagsClipseg", + "JagsCombineMasks", + "SVG", + "YoloSEGdetectionNode", + "YoloSegNode", + "color_drop", + "my unique name", + "xy_Tiling_KSampler" + ], + { + "author": "jags111", + "description": "This extension offers various vector manipulation and generation tools", + "nickname": "Jags_VectorMagic", + "title": "Jags_VectorMagic", + "title_aux": "ComfyUI_Jags_VectorMagic" + } + ], + "https://github.com/jags111/efficiency-nodes-comfyui": [ + [ + "AnimateDiff Script", + "Apply ControlNet Stack", + "Control Net Stacker", + "Eff. Loader SDXL", + "Efficient Loader", + "HighRes-Fix Script", + "Image Overlay", + "Join XY Inputs of Same Type", + "KSampler (Efficient)", + "KSampler Adv. (Efficient)", + "KSampler SDXL (Eff.)", + "LatentUpscaler", + "LoRA Stack to String converter", + "LoRA Stacker", + "Manual XY Entry Info", + "NNLatentUpscale", + "Noise Control Script", + "Pack SDXL Tuple", + "Tiled Upscaler Script", + "Unpack SDXL Tuple", + "XY Input: Add/Return Noise", + "XY Input: Aesthetic Score", + "XY Input: CFG Scale", + "XY Input: Checkpoint", + "XY Input: Clip Skip", + "XY Input: Control Net", + "XY Input: Control Net Plot", + "XY Input: Denoise", + "XY Input: LoRA", + "XY Input: LoRA Plot", + "XY Input: LoRA Stacks", + "XY Input: Manual XY Entry", + "XY Input: Prompt S/R", + "XY Input: Refiner On/Off", + "XY Input: Sampler/Scheduler", + "XY Input: Seeds++ Batch", + "XY Input: Steps", + "XY Input: VAE", + "XY Plot" + ], + { + "title_aux": "Efficiency Nodes for ComfyUI Version 2.0+" + } + ], + "https://github.com/jamal-alkharrat/ComfyUI_rotate_image": [ + [ + "RotateImage" + ], + { + "title_aux": "ComfyUI_rotate_image" + } + ], + "https://github.com/jamesWalker55/comfyui-various": [ + [], + { + "nodename_pattern": "^JW", + "title_aux": "Various ComfyUI Nodes by Type" + } + ], + "https://github.com/jesenzhang/ComfyUI_StreamDiffusion": [ + [ + "StreamDiffusion_Loader", + "StreamDiffusion_Sampler" + ], + { + "title_aux": "ComfyUI_StreamDiffusion" + } + ], + "https://github.com/jitcoder/lora-info": [ + [ + "ImageFromURL", + "LoraInfo" + ], + { + "title_aux": "LoraInfo" + } + ], + "https://github.com/jjkramhoeft/ComfyUI-Jjk-Nodes": [ + [ + "JjkConcat", + "JjkShowText", + "JjkText", + "SDXLRecommendedImageSize" + ], + { + "title_aux": "ComfyUI-Jjk-Nodes" + } + ], + "https://github.com/jojkaart/ComfyUI-sampler-lcm-alternative": [ + [ + "LCMScheduler", + "SamplerLCMAlternative", + "SamplerLCMCycle" + ], + { + "title_aux": "ComfyUI-sampler-lcm-alternative" + } + ], + "https://github.com/jordoh/ComfyUI-Deepface": [ + [ + "DeepfaceExtractFaces", + "DeepfaceVerify" + ], + { + "title_aux": "ComfyUI Deepface" + } + ], + "https://github.com/jtrue/ComfyUI-JaRue": [ + [ + "Text2Image_jru", + "YouTube2Prompt_jru" + ], + { + "nodename_pattern": "_jru$", + "title_aux": "ComfyUI-JaRue" + } + ], + "https://github.com/ka-puna/comfyui-yanc": [ + [ + 
"YANC.ConcatStrings", + "YANC.FormatDatetimeString", + "YANC.GetWidgetValueString", + "YANC.IntegerCaster", + "YANC.MultilineString", + "YANC.TruncateString" + ], + { + "title_aux": "comfyui-yanc" + } + ], + "https://github.com/kadirnar/ComfyUI-Transformers": [ + [ + "DepthEstimationPipeline", + "ImageClassificationPipeline", + "ImageSegmentationPipeline", + "ObjectDetectionPipeline" + ], + { + "title_aux": "ComfyUI-Transformers" + } + ], + "https://github.com/kenjiqq/qq-nodes-comfyui": [ + [ + "Any List", + "Axis Pack", + "Axis Unpack", + "Image Accumulator End", + "Image Accumulator Start", + "Load Lines From Text File", + "Slice List", + "Text Splitter", + "XY Grid Helper" + ], + { + "title_aux": "qq-nodes-comfyui" + } + ], + "https://github.com/kft334/Knodes": [ + [ + "Image(s) To Websocket (Base64)", + "ImageOutput", + "Load Image (Base64)", + "Load Images (Base64)" + ], + { + "title_aux": "Knodes" + } + ], + "https://github.com/kijai/ComfyUI-ADMotionDirector": [ + [ + "ADMD_AdditionalModelSelect", + "ADMD_CheckpointLoader", + "ADMD_DiffusersLoader", + "ADMD_InitializeTraining", + "ADMD_LoadLora", + "ADMD_SaveLora", + "ADMD_TrainLora", + "ADMD_ValidationSampler", + "ADMD_ValidationSettings" + ], + { + "title_aux": "Animatediff MotionLoRA Trainer" + } + ], + "https://github.com/kijai/ComfyUI-CCSR": [ + [ + "CCSR_Model_Select", + "CCSR_Upscale" + ], + { + "title_aux": "ComfyUI-CCSR" + } + ], + "https://github.com/kijai/ComfyUI-DDColor": [ + [ + "DDColor_Colorize" + ], + { + "title_aux": "ComfyUI-DDColor" + } + ], + "https://github.com/kijai/ComfyUI-KJNodes": [ + [ + "AddLabel", + "BatchCLIPSeg", + "BatchCropFromMask", + "BatchCropFromMaskAdvanced", + "BatchUncrop", + "BatchUncropAdvanced", + "BboxToInt", + "ColorMatch", + "ColorToMask", + "CondPassThrough", + "ConditioningMultiCombine", + "ConditioningSetMaskAndCombine", + "ConditioningSetMaskAndCombine3", + "ConditioningSetMaskAndCombine4", + "ConditioningSetMaskAndCombine5", + "CreateAudioMask", + "CreateFadeMask", + "CreateFadeMaskAdvanced", + "CreateFluidMask", + "CreateGradientMask", + "CreateMagicMask", + "CreateShapeMask", + "CreateTextMask", + "CreateVoronoiMask", + "CrossFadeImages", + "DummyLatentOut", + "EffnetEncode", + "EmptyLatentImagePresets", + "FilterZeroMasksAndCorrespondingImages", + "FlipSigmasAdjusted", + "FloatConstant", + "GLIGENTextBoxApplyBatch", + "GenerateNoise", + "GetImageRangeFromBatch", + "GetImagesFromBatchIndexed", + "GetLatentsFromBatchIndexed", + "GrowMaskWithBlur", + "INTConstant", + "ImageBatchRepeatInterleaving", + "ImageBatchTestPattern", + "ImageConcanate", + "ImageGrabPIL", + "ImageGridComposite2x2", + "ImageGridComposite3x3", + "ImageTransformByNormalizedAmplitude", + "ImageUpscaleWithModelBatched", + "InjectNoiseToLatent", + "InsertImageBatchByIndexes", + "NormalizeLatent", + "NormalizedAmplitudeToMask", + "OffsetMask", + "OffsetMaskByNormalizedAmplitude", + "ReferenceOnlySimple3", + "ReplaceImagesInBatch", + "ResizeMask", + "ReverseImageBatch", + "RoundMask", + "SaveImageWithAlpha", + "ScaleBatchPromptSchedule", + "SomethingToString", + "SoundReactive", + "SplitBboxes", + "StableZero123_BatchSchedule", + "StringConstant", + "VRAM_Debug", + "WidgetToString" + ], + { + "title_aux": "KJNodes for ComfyUI" + } + ], + "https://github.com/kijai/ComfyUI-Marigold": [ + [ + "ColorizeDepthmap", + "MarigoldDepthEstimation", + "RemapDepth", + "SaveImageOpenEXR" + ], + { + "title_aux": "Marigold depth estimation in ComfyUI" + } + ], + "https://github.com/kijai/ComfyUI-SVD": [ + [ + "SVDimg2vid" + ], + { + 
"title_aux": "ComfyUI-SVD" + } + ], + "https://github.com/kinfolk0117/ComfyUI_GradientDeepShrink": [ + [ + "GradientPatchModelAddDownscale", + "GradientPatchModelAddDownscaleAdvanced" + ], + { + "title_aux": "ComfyUI_GradientDeepShrink" + } + ], + "https://github.com/kinfolk0117/ComfyUI_Pilgram": [ + [ + "Pilgram" + ], + { + "title_aux": "ComfyUI_Pilgram" + } + ], + "https://github.com/kinfolk0117/ComfyUI_SimpleTiles": [ + [ + "DynamicTileMerge", + "DynamicTileSplit", + "TileCalc", + "TileMerge", + "TileSplit" + ], + { + "title_aux": "SimpleTiles" + } + ], + "https://github.com/kinfolk0117/ComfyUI_TiledIPAdapter": [ + [ + "TiledIPAdapter" + ], + { + "title_aux": "TiledIPAdapter" + } + ], + "https://github.com/knuknX/ComfyUI-Image-Tools": [ + [ + "BatchImagePathLoader", + "ImageBgRemoveProcessor", + "ImageCheveretoUploader", + "ImageStandardResizeProcessor", + "JSONMessageNotifyTool", + "PreviewJSONNode", + "SingleImagePathLoader", + "SingleImageUrlLoader" + ], + { + "title_aux": "ComfyUI-Image-Tools" + } + ], + "https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI": [ + [ + "LLLiteLoader" + ], + { + "title_aux": "ControlNet-LLLite-ComfyUI" + } + ], + "https://github.com/komojini/ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes": [ + [ + "S3 Bucket LoRA", + "S3Bucket_Load_LoRA", + "XL DreamBooth LoRA", + "XLDB_LoRA" + ], + { + "title_aux": "ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes" + } + ], + "https://github.com/komojini/komojini-comfyui-nodes": [ + [ + "BatchCreativeInterpolationNodeDynamicSettings", + "CachedGetter", + "DragNUWAImageCanvas", + "FlowBuilder", + "FlowBuilder (adv)", + "FlowBuilder (advanced)", + "FlowBuilder (advanced) Setter", + "FlowBuilderSetter", + "FlowBuilderSetter (adv)", + "Getter", + "ImageCropByRatio", + "ImageCropByRatioAndResize", + "ImageGetter", + "ImageMerger", + "ImagesCropByRatioAndResizeBatch", + "KSamplerAdvancedCacheable", + "KSamplerCacheable", + "Setter", + "UltimateVideoLoader", + "UltimateVideoLoader (simple)", + "YouTubeVideoLoader" + ], + { + "title_aux": "komojini-comfyui-nodes" + } + ], + "https://github.com/kwaroran/abg-comfyui": [ + [ + "Remove Image Background (abg)" + ], + { + "title_aux": "abg-comfyui" + } + ], + "https://github.com/laksjdjf/LCMSampler-ComfyUI": [ + [ + "SamplerLCM", + "TAESDLoader" + ], + { + "title_aux": "LCMSampler-ComfyUI" + } + ], + "https://github.com/laksjdjf/LoRA-Merger-ComfyUI": [ + [ + "LoraLoaderFromWeight", + "LoraLoaderWeightOnly", + "LoraMerge", + "LoraSave" + ], + { + "title_aux": "LoRA-Merger-ComfyUI" + } + ], + "https://github.com/laksjdjf/attention-couple-ComfyUI": [ + [ + "Attention couple" + ], + { + "title_aux": "attention-couple-ComfyUI" + } + ], + "https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI": [ + [ + "CDTuner", + "Negapip", + "Negpip" + ], + { + "title_aux": "cd-tuner_negpip-ComfyUI" + } + ], + "https://github.com/laksjdjf/pfg-ComfyUI": [ + [ + "PFG" + ], + { + "title_aux": "pfg-ComfyUI" + } + ], + "https://github.com/lilly1987/ComfyUI_node_Lilly": [ + [ + "CheckpointLoaderSimpleText", + "LoraLoaderText", + "LoraLoaderTextRandom", + "Random_Sampler", + "VAELoaderDecode" + ], + { + "title_aux": "simple wildcard for ComfyUI" + } + ], + "https://github.com/lldacing/comfyui-easyapi-nodes": [ + [ + "Base64ToImage", + "Base64ToMask", + "ImageToBase64", + "ImageToBase64Advanced", + "LoadImageFromURL", + "LoadImageToBase64", + "LoadMaskFromURL", + "MaskImageToBase64", + "MaskToBase64", + "MaskToBase64Image", + "SamAutoMaskSEGS" + ], + { + "title_aux": "comfyui-easyapi-nodes" + } + ], + 
"https://github.com/longgui0318/comfyui-mask-util": [ + [ + "Mask Region Info", + "Mask Selection Of Masks", + "Split Masks" + ], + { + "title_aux": "comfyui-mask-util" + } + ], + "https://github.com/lordgasmic/ComfyUI-Wildcards/raw/master/wildcards.py": [ + [ + "CLIPTextEncodeWithWildcards" + ], + { + "title_aux": "Wildcards" + } + ], + "https://github.com/lrzjason/ComfyUIJasonNode/raw/main/SDXLMixSampler.py": [ + [ + "SDXLMixSampler" + ], + { + "title_aux": "ComfyUIJasonNode" + } + ], + "https://github.com/ltdrdata/ComfyUI-Impact-Pack": [ + [ + "AddMask", + "BasicPipeToDetailerPipe", + "BasicPipeToDetailerPipeSDXL", + "BboxDetectorCombined", + "BboxDetectorCombined_v2", + "BboxDetectorForEach", + "BboxDetectorSEGS", + "BitwiseAndMask", + "BitwiseAndMaskForEach", + "CLIPSegDetectorProvider", + "CfgScheduleHookProvider", + "CombineRegionalPrompts", + "CoreMLDetailerHookProvider", + "DenoiseScheduleHookProvider", + "DenoiseSchedulerDetailerHookProvider", + "DetailerForEach", + "DetailerForEachDebug", + "DetailerForEachDebugPipe", + "DetailerForEachPipe", + "DetailerForEachPipeForAnimateDiff", + "DetailerHookCombine", + "DetailerPipeToBasicPipe", + "EditBasicPipe", + "EditDetailerPipe", + "EditDetailerPipeSDXL", + "EmptySegs", + "FaceDetailer", + "FaceDetailerPipe", + "FromBasicPipe", + "FromBasicPipe_v2", + "FromDetailerPipe", + "FromDetailerPipeSDXL", + "FromDetailerPipe_v2", + "ImageListToImageBatch", + "ImageMaskSwitch", + "ImageReceiver", + "ImageSender", + "ImpactAssembleSEGS", + "ImpactCombineConditionings", + "ImpactCompare", + "ImpactConcatConditionings", + "ImpactConditionalBranch", + "ImpactConditionalBranchSelMode", + "ImpactConditionalStopIteration", + "ImpactControlBridge", + "ImpactControlNetApplyAdvancedSEGS", + "ImpactControlNetApplySEGS", + "ImpactControlNetClearSEGS", + "ImpactConvertDataType", + "ImpactDecomposeSEGS", + "ImpactDilateMask", + "ImpactDilateMaskInSEGS", + "ImpactDilate_Mask_SEG_ELT", + "ImpactDummyInput", + "ImpactEdit_SEG_ELT", + "ImpactFloat", + "ImpactFrom_SEG_ELT", + "ImpactGaussianBlurMask", + "ImpactGaussianBlurMaskInSEGS", + "ImpactHFTransformersClassifierProvider", + "ImpactIfNone", + "ImpactImageBatchToImageList", + "ImpactImageInfo", + "ImpactInt", + "ImpactInversedSwitch", + "ImpactIsNotEmptySEGS", + "ImpactKSamplerAdvancedBasicPipe", + "ImpactKSamplerBasicPipe", + "ImpactLatentInfo", + "ImpactLogger", + "ImpactLogicalOperators", + "ImpactMakeImageBatch", + "ImpactMakeImageList", + "ImpactMakeTileSEGS", + "ImpactMinMax", + "ImpactNeg", + "ImpactNodeSetMuteState", + "ImpactQueueTrigger", + "ImpactQueueTriggerCountdown", + "ImpactRemoteBoolean", + "ImpactRemoteInt", + "ImpactSEGSClassify", + "ImpactSEGSConcat", + "ImpactSEGSLabelAssign", + "ImpactSEGSLabelFilter", + "ImpactSEGSOrderedFilter", + "ImpactSEGSPicker", + "ImpactSEGSRangeFilter", + "ImpactSEGSToMaskBatch", + "ImpactSEGSToMaskList", + "ImpactScaleBy_BBOX_SEG_ELT", + "ImpactSegsAndMask", + "ImpactSegsAndMaskForEach", + "ImpactSetWidgetValue", + "ImpactSimpleDetectorSEGS", + "ImpactSimpleDetectorSEGSPipe", + "ImpactSimpleDetectorSEGS_for_AD", + "ImpactSleep", + "ImpactStringSelector", + "ImpactSwitch", + "ImpactValueReceiver", + "ImpactValueSender", + "ImpactWildcardEncode", + "ImpactWildcardProcessor", + "IterativeImageUpscale", + "IterativeLatentUpscale", + "KSamplerAdvancedProvider", + "KSamplerProvider", + "LatentPixelScale", + "LatentReceiver", + "LatentSender", + "LatentSwitch", + "MMDetDetectorProvider", + "MMDetLoader", + "MaskDetailerPipe", + "MaskListToMaskBatch", + "MaskPainter", 
+ "MaskToSEGS", + "MaskToSEGS_for_AnimateDiff", + "MasksToMaskList", + "MediaPipeFaceMeshToSEGS", + "NoiseInjectionDetailerHookProvider", + "NoiseInjectionHookProvider", + "ONNXDetectorProvider", + "ONNXDetectorSEGS", + "PixelKSampleHookCombine", + "PixelKSampleUpscalerProvider", + "PixelKSampleUpscalerProviderPipe", + "PixelTiledKSampleUpscalerProvider", + "PixelTiledKSampleUpscalerProviderPipe", + "PreviewBridge", + "PreviewBridgeLatent", + "PreviewDetailerHookProvider", + "ReencodeLatent", + "ReencodeLatentPipe", + "RegionalPrompt", + "RegionalSampler", + "RegionalSamplerAdvanced", + "RemoveImageFromSEGS", + "RemoveNoiseMask", + "SAMDetectorCombined", + "SAMDetectorSegmented", + "SAMLoader", + "SEGSDetailer", + "SEGSDetailerForAnimateDiff", + "SEGSLabelFilterDetailerHookProvider", + "SEGSOrderedFilterDetailerHookProvider", + "SEGSPaste", + "SEGSPreview", + "SEGSPreviewCNet", + "SEGSRangeFilterDetailerHookProvider", + "SEGSSwitch", + "SEGSToImageList", + "SegmDetectorCombined", + "SegmDetectorCombined_v2", + "SegmDetectorForEach", + "SegmDetectorSEGS", + "Segs Mask", + "Segs Mask ForEach", + "SegsMaskCombine", + "SegsToCombinedMask", + "SetDefaultImageForSEGS", + "StepsScheduleHookProvider", + "SubtractMask", + "SubtractMaskForEach", + "TiledKSamplerProvider", + "ToBasicPipe", + "ToBinaryMask", + "ToDetailerPipe", + "ToDetailerPipeSDXL", + "TwoAdvancedSamplersForMask", + "TwoSamplersForMask", + "TwoSamplersForMaskUpscalerProvider", + "TwoSamplersForMaskUpscalerProviderPipe", + "UltralyticsDetectorProvider", + "UnsamplerDetailerHookProvider", + "UnsamplerHookProvider" + ], + { + "author": "Dr.Lt.Data", + "description": "This extension offers various detector nodes and detailer nodes that allow you to configure a workflow that automatically enhances facial details. 
It also provides an iterative upscaler.", + "nickname": "Impact Pack", + "title": "Impact Pack", + "title_aux": "ComfyUI Impact Pack" + } + ], + "https://github.com/ltdrdata/ComfyUI-Inspire-Pack": [ + [ + "AnimeLineArt_Preprocessor_Provider_for_SEGS //Inspire", + "ApplyRegionalIPAdapters //Inspire", + "BindImageListPromptList //Inspire", + "CLIPTextEncodeWithWeight //Inspire", + "CacheBackendData //Inspire", + "CacheBackendDataList //Inspire", + "CacheBackendDataNumberKey //Inspire", + "CacheBackendDataNumberKeyList //Inspire", + "Canny_Preprocessor_Provider_for_SEGS //Inspire", + "ChangeImageBatchSize //Inspire", + "CheckpointLoaderSimpleShared //Inspire", + "Color_Preprocessor_Provider_for_SEGS //Inspire", + "ConcatConditioningsWithMultiplier //Inspire", + "DWPreprocessor_Provider_for_SEGS //Inspire", + "FakeScribblePreprocessor_Provider_for_SEGS //Inspire", + "FloatRange //Inspire", + "FromIPAdapterPipe //Inspire", + "GlobalSampler //Inspire", + "GlobalSeed //Inspire", + "HEDPreprocessor_Provider_for_SEGS //Inspire", + "HyperTile //Inspire", + "IPAdapterModelHelper //Inspire", + "ImageBatchSplitter //Inspire", + "InpaintPreprocessor_Provider_for_SEGS //Inspire", + "KSampler //Inspire", + "KSamplerAdvanced //Inspire", + "KSamplerAdvancedPipe //Inspire", + "KSamplerAdvancedProgress //Inspire", + "KSamplerPipe //Inspire", + "KSamplerProgress //Inspire", + "LatentBatchSplitter //Inspire", + "LeRes_DepthMap_Preprocessor_Provider_for_SEGS //Inspire", + "LineArt_Preprocessor_Provider_for_SEGS //Inspire", + "ListCounter //Inspire", + "LoadImage //Inspire", + "LoadImageListFromDir //Inspire", + "LoadImagesFromDir //Inspire", + "LoadPromptsFromDir //Inspire", + "LoadPromptsFromFile //Inspire", + "LoadSinglePromptFromFile //Inspire", + "LoraBlockInfo //Inspire", + "LoraLoaderBlockWeight //Inspire", + "MakeBasicPipe //Inspire", + "Manga2Anime_LineArt_Preprocessor_Provider_for_SEGS //Inspire", + "MediaPipeFaceMeshDetectorProvider //Inspire", + "MediaPipe_FaceMesh_Preprocessor_Provider_for_SEGS //Inspire", + "MeshGraphormerDepthMapPreprocessorProvider_for_SEGS //Inspire", + "MiDaS_DepthMap_Preprocessor_Provider_for_SEGS //Inspire", + "OpenPose_Preprocessor_Provider_for_SEGS //Inspire", + "PromptBuilder //Inspire", + "PromptExtractor //Inspire", + "RandomGeneratorForList //Inspire", + "RegionalConditioningColorMask //Inspire", + "RegionalConditioningSimple //Inspire", + "RegionalIPAdapterColorMask //Inspire", + "RegionalIPAdapterEncodedColorMask //Inspire", + "RegionalIPAdapterEncodedMask //Inspire", + "RegionalIPAdapterMask //Inspire", + "RegionalPromptColorMask //Inspire", + "RegionalPromptSimple //Inspire", + "RegionalSeedExplorerColorMask //Inspire", + "RegionalSeedExplorerMask //Inspire", + "RemoveBackendData //Inspire", + "RemoveBackendDataNumberKey //Inspire", + "RemoveControlNet //Inspire", + "RemoveControlNetFromRegionalPrompts //Inspire", + "RetrieveBackendData //Inspire", + "RetrieveBackendDataNumberKey //Inspire", + "SeedExplorer //Inspire", + "ShowCachedInfo //Inspire", + "StableCascade_CheckpointLoader //Inspire", + "TilePreprocessor_Provider_for_SEGS //Inspire", + "ToIPAdapterPipe //Inspire", + "UnzipPrompt //Inspire", + "WildcardEncode //Inspire", + "XY Input: Lora Block Weight //Inspire", + "ZipPrompt //Inspire", + "Zoe_DepthMap_Preprocessor_Provider_for_SEGS //Inspire" + ], + { + "author": "Dr.Lt.Data", + "description": "This extension provides various nodes to support Lora Block Weight and the Impact Pack.", + "nickname": "Inspire Pack", + "nodename_pattern": "Inspire$", + "title": "Inspire 
Pack", + "title_aux": "ComfyUI Inspire Pack" + } + ], + "https://github.com/m-sokes/ComfyUI-Sokes-Nodes": [ + [ + "Custom Date Format | sokes \ud83e\uddac", + "Latent Switch x9 | sokes \ud83e\uddac" + ], + { + "title_aux": "ComfyUI Sokes Nodes" + } + ], + "https://github.com/m957ymj75urz/ComfyUI-Custom-Nodes/raw/main/clip-text-encode-split/clip_text_encode_split.py": [ + [ + "RawText", + "RawTextCombine", + "RawTextEncode", + "RawTextReplace" + ], + { + "title_aux": "m957ymj75urz/ComfyUI-Custom-Nodes" + } + ], + "https://github.com/mape/ComfyUI-mape-Helpers": [ + [ + "mape Variable" + ], + { + "author": "mape", + "description": "Various QoL improvements like prompt tweaking, variable assignment, image preview, fuzzy search, error reporting, organizing and node navigation.", + "nickname": "\ud83d\udfe1 mape's helpers", + "title": "mape's helpers", + "title_aux": "mape's ComfyUI Helpers" + } + ], + "https://github.com/marhensa/sdxl-recommended-res-calc": [ + [ + "RecommendedResCalc" + ], + { + "title_aux": "Recommended Resolution Calculator" + } + ], + "https://github.com/martijnat/comfyui-previewlatent": [ + [ + "PreviewLatent", + "PreviewLatentAdvanced", + "PreviewLatentXL" + ], + { + "title_aux": "comfyui-previewlatent" + } + ], + "https://github.com/massao000/ComfyUI_aspect_ratios": [ + [ + "Aspect Ratios Node" + ], + { + "title_aux": "ComfyUI_aspect_ratios" + } + ], + "https://github.com/matan1905/ComfyUI-Serving-Toolkit": [ + [ + "DiscordServing", + "ServingInputNumber", + "ServingInputText", + "ServingOutput", + "WebSocketServing" + ], + { + "title_aux": "ComfyUI Serving toolkit" + } + ], + "https://github.com/mav-rik/facerestore_cf": [ + [ + "CropFace", + "FaceRestoreCFWithModel", + "FaceRestoreModelLoader" + ], + { + "title_aux": "Facerestore CF (Code Former)" + } + ], + "https://github.com/mbrostami/ComfyUI-HF": [ + [ + "GPT2Node" + ], + { + "title_aux": "ComfyUI-HF" + } + ], + "https://github.com/mcmonkeyprojects/sd-dynamic-thresholding": [ + [ + "DynamicThresholdingFull", + "DynamicThresholdingSimple" + ], + { + "title_aux": "Stable Diffusion Dynamic Thresholding (CFG Scale Fix)" + } + ], + "https://github.com/meap158/ComfyUI-Background-Replacement": [ + [ + "BackgroundReplacement", + "ImageComposite" + ], + { + "title_aux": "ComfyUI-Background-Replacement" + } + ], + "https://github.com/meap158/ComfyUI-GPU-temperature-protection": [ + [ + "GPUTemperatureProtection" + ], + { + "title_aux": "GPU temperature protection" + } + ], + "https://github.com/meap158/ComfyUI-Prompt-Expansion": [ + [ + "PromptExpansion" + ], + { + "title_aux": "ComfyUI-Prompt-Expansion" + } + ], + "https://github.com/melMass/comfy_mtb": [ + [ + "Animation Builder (mtb)", + "Any To String (mtb)", + "Batch Float (mtb)", + "Batch Float Assemble (mtb)", + "Batch Float Fill (mtb)", + "Batch Make (mtb)", + "Batch Merge (mtb)", + "Batch Shake (mtb)", + "Batch Shape (mtb)", + "Batch Transform (mtb)", + "Bbox (mtb)", + "Bbox From Mask (mtb)", + "Blur (mtb)", + "Color Correct (mtb)", + "Colored Image (mtb)", + "Concat Images (mtb)", + "Crop (mtb)", + "Debug (mtb)", + "Deep Bump (mtb)", + "Export With Ffmpeg (mtb)", + "Face Swap (mtb)", + "Film Interpolation (mtb)", + "Fit Number (mtb)", + "Float To Number (mtb)", + "Get Batch From History (mtb)", + "Image Compare (mtb)", + "Image Premultiply (mtb)", + "Image Remove Background Rembg (mtb)", + "Image Resize Factor (mtb)", + "Image Tile Offset (mtb)", + "Int To Bool (mtb)", + "Int To Number (mtb)", + "Interpolate Clip Sequential (mtb)", + "Latent Lerp (mtb)", + "Load 
Face Analysis Model (mtb)", + "Load Face Enhance Model (mtb)", + "Load Face Swap Model (mtb)", + "Load Film Model (mtb)", + "Load Image From Url (mtb)", + "Load Image Sequence (mtb)", + "Mask To Image (mtb)", + "Math Expression (mtb)", + "Model Patch Seamless (mtb)", + "Pick From Batch (mtb)", + "Qr Code (mtb)", + "Restore Face (mtb)", + "Save Gif (mtb)", + "Save Image Grid (mtb)", + "Save Image Sequence (mtb)", + "Save Tensors (mtb)", + "Sharpen (mtb)", + "Smart Step (mtb)", + "Stack Images (mtb)", + "String Replace (mtb)", + "Styles Loader (mtb)", + "Text To Image (mtb)", + "Transform Image (mtb)", + "Uncrop (mtb)", + "Unsplash Image (mtb)", + "Vae Decode (mtb)" + ], + { + "nodename_pattern": "\\(mtb\\)$", + "title_aux": "MTB Nodes" + } + ], + "https://github.com/mihaiiancu/ComfyUI_Inpaint": [ + [ + "InpaintMediapipe" + ], + { + "title_aux": "mihaiiancu/Inpaint" + } + ], + "https://github.com/mikkel/ComfyUI-text-overlay": [ + [ + "Image Text Overlay" + ], + { + "title_aux": "ComfyUI - Text Overlay Plugin" + } + ], + "https://github.com/mikkel/comfyui-mask-boundingbox": [ + [ + "Mask Bounding Box" + ], + { + "title_aux": "ComfyUI - Mask Bounding Box" + } + ], + "https://github.com/mlinmg/ComfyUI-LaMA-Preprocessor": [ + [ + "LaMaPreprocessor", + "lamaPreprocessor" + ], + { + "title_aux": "LaMa Preprocessor [WIP]" + } + ], + "https://github.com/modusCell/ComfyUI-dimension-node-modusCell": [ + [ + "DimensionProviderFree modusCell", + "DimensionProviderRatio modusCell", + "String Concat modusCell" + ], + { + "title_aux": "Preset Dimensions" + } + ], + "https://github.com/mpiquero7164/ComfyUI-SaveImgPrompt": [ + [ + "Save IMG Prompt" + ], + { + "title_aux": "SaveImgPrompt" + } + ], + "https://github.com/nagolinc/ComfyUI_FastVAEDecorder_SDXL": [ + [ + "FastLatentToImage" + ], + { + "title_aux": "ComfyUI_FastVAEDecorder_SDXL" + } + ], + "https://github.com/natto-maki/ComfyUI-NegiTools": [ + [ + "NegiTools_CompositeImages", + "NegiTools_DepthEstimationByMarigold", + "NegiTools_DetectFaceRotationForInpainting", + "NegiTools_ImageProperties", + "NegiTools_LatentProperties", + "NegiTools_NoiseImageGenerator", + "NegiTools_OpenAiDalle3", + "NegiTools_OpenAiGpt", + "NegiTools_OpenAiGpt4v", + "NegiTools_OpenAiTranslate", + "NegiTools_OpenPoseToPointList", + "NegiTools_PointListToMask", + "NegiTools_RandomImageLoader", + "NegiTools_SaveImageToDirectory", + "NegiTools_SeedGenerator", + "NegiTools_StereoImageGenerator", + "NegiTools_StringFunction" + ], + { + "title_aux": "ComfyUI-NegiTools" + } + ], + "https://github.com/nicolai256/comfyUI_Nodes_nicolai256/raw/main/yugioh-presets.py": [ + [ + "yugioh_Presets" + ], + { + "title_aux": "comfyUI_Nodes_nicolai256" + } + ], + "https://github.com/ningxiaoxiao/comfyui-NDI": [ + [ + "NDI_LoadImage", + "NDI_SendImage" + ], + { + "title_aux": "comfyui-NDI" + } + ], + "https://github.com/nkchocoai/ComfyUI-PromptUtilities": [ + [ + "PromptUtilitiesConstString", + "PromptUtilitiesConstStringMultiLine", + "PromptUtilitiesFormatString", + "PromptUtilitiesJoinStringList", + "PromptUtilitiesLoadPreset", + "PromptUtilitiesLoadPresetAdvanced", + "PromptUtilitiesRandomPreset", + "PromptUtilitiesRandomPresetAdvanced" + ], + { + "title_aux": "ComfyUI-PromptUtilities" + } + ], + "https://github.com/nkchocoai/ComfyUI-SaveImageWithMetaData": [ + [ + "SaveImageWithMetaData" + ], + { + "title_aux": "ComfyUI-SaveImageWithMetaData" + } + ], + "https://github.com/nkchocoai/ComfyUI-SizeFromPresets": [ + [ + "EmptyLatentImageFromPresetsSD15", + "EmptyLatentImageFromPresetsSDXL", + 
"RandomEmptyLatentImageFromPresetsSD15", + "RandomEmptyLatentImageFromPresetsSDXL", + "RandomSizeFromPresetsSD15", + "RandomSizeFromPresetsSDXL", + "SizeFromPresetsSD15", + "SizeFromPresetsSDXL" + ], + { + "title_aux": "ComfyUI-SizeFromPresets" + } + ], + "https://github.com/nkchocoai/ComfyUI-TextOnSegs": [ + [ + "CalcMaxFontSize", + "ExtractDominantColor", + "GetComplementaryColor", + "SegsToRegion", + "TextOnSegsFloodFill" + ], + { + "title_aux": "ComfyUI-TextOnSegs" + } + ], + "https://github.com/noembryo/ComfyUI-noEmbryo": [ + [ + "PromptTermList1", + "PromptTermList2", + "PromptTermList3", + "PromptTermList4", + "PromptTermList5", + "PromptTermList6" + ], + { + "author": "noEmbryo", + "description": "Some useful nodes for ComfyUI", + "nickname": "noEmbryo", + "title": "noEmbryo nodes for ComfyUI", + "title_aux": "noEmbryo nodes" + } + ], + "https://github.com/nosiu/comfyui-instantId-faceswap": [ + [ + "FaceEmbed", + "FaceSwapGenerationInpaint", + "FaceSwapSetupPipeline", + "LCMLora" + ], + { + "title_aux": "ComfyUI InstantID Faceswapper" + } + ], + "https://github.com/noxinias/ComfyUI_NoxinNodes": [ + [ + "NoxinChime", + "NoxinPromptLoad", + "NoxinPromptSave", + "NoxinScaledResolution", + "NoxinSimpleMath", + "NoxinSplitPrompt" + ], + { + "title_aux": "ComfyUI_NoxinNodes" + } + ], + "https://github.com/ntc-ai/ComfyUI-DARE-LoRA-Merge": [ + [ + "Apply LoRA", + "DARE Merge LoRA Stack", + "Save LoRA" + ], + { + "title_aux": "ComfyUI - Apply LoRA Stacker with DARE" + } + ], + "https://github.com/ntdviet/comfyui-ext/raw/main/custom_nodes/gcLatentTunnel/gcLatentTunnel.py": [ + [ + "gcLatentTunnel" + ], + { + "title_aux": "ntdviet/comfyui-ext" + } + ], + "https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92": [ + [ + "CLIPStringEncode _O", + "Chat completion _O", + "ChatGPT Simple _O", + "ChatGPT _O", + "ChatGPT compact _O", + "Chat_Completion _O", + "Chat_Message _O", + "Chat_Message_fromString _O", + "Concat Text _O", + "ConcatRandomNSP_O", + "Debug String _O", + "Debug Text _O", + "Debug Text route _O", + "Edit_image _O", + "Equation1param _O", + "Equation2params _O", + "GetImage_(Width&Height) _O", + "GetLatent_(Width&Height) _O", + "ImageScaleFactor _O", + "ImageScaleFactorSimple _O", + "LatentUpscaleFactor _O", + "LatentUpscaleFactorSimple _O", + "LatentUpscaleMultiply", + "Note _O", + "RandomNSP _O", + "Replace Text _O", + "String _O", + "Text _O", + "Text2Image _O", + "Trim Text _O", + "VAEDecodeParallel _O", + "combine_chat_messages _O", + "compine_chat_messages _O", + "concat Strings _O", + "create image _O", + "create_image _O", + "debug Completeion _O", + "debug messages_O", + "float _O", + "floatToInt _O", + "floatToText _O", + "int _O", + "intToFloat _O", + "load_openAI _O", + "replace String _O", + "replace String advanced _O", + "saveTextToFile _O", + "seed _O", + "selectLatentFromBatch _O", + "string2Image _O", + "trim String _O", + "variation_image _O" + ], + { + "title_aux": "Quality of life Suit:V2" + } + ], + "https://github.com/ostris/ostris_nodes_comfyui": [ + [ + "LLM Pipe Loader - Ostris", + "LLM Prompt Upsampling - Ostris", + "One Seed - Ostris", + "Text Box - Ostris" + ], + { + "nodename_pattern": "- Ostris$", + "title_aux": "Ostris Nodes ComfyUI" + } + ], + "https://github.com/ownimage/ComfyUI-ownimage": [ + [ + "Caching Image Loader" + ], + { + "title_aux": "ComfyUI-ownimage" + } + ], + "https://github.com/oyvindg/ComfyUI-TrollSuite": [ + [ + "BinaryImageMask", + "ImagePadding", + "LoadLastImage", + "RandomMask", + "TransparentImage" + ], + { + "title_aux": 
"ComfyUI-TrollSuite" + } + ], + "https://github.com/palant/extended-saveimage-comfyui": [ + [ + "SaveImageExtended" + ], + { + "title_aux": "Extended Save Image for ComfyUI" + } + ], + "https://github.com/palant/image-resize-comfyui": [ + [ + "ImageResize" + ], + { + "title_aux": "Image Resize for ComfyUI" + } + ], + "https://github.com/pants007/comfy-pants": [ + [ + "CLIPTextEncodeAIO", + "Image Make Square" + ], + { + "title_aux": "pants" + } + ], + "https://github.com/paulo-coronado/comfy_clip_blip_node": [ + [ + "CLIPTextEncodeBLIP", + "CLIPTextEncodeBLIP-2", + "Example" + ], + { + "title_aux": "comfy_clip_blip_node" + } + ], + "https://github.com/picturesonpictures/comfy_PoP": [ + [ + "AdaptiveCannyDetector_PoP", + "AnyAspectRatio", + "ConditioningMultiplier_PoP", + "ConditioningNormalizer_PoP", + "DallE3_PoP", + "LoadImageResizer_PoP", + "LoraStackLoader10_PoP", + "LoraStackLoader_PoP", + "VAEDecoderPoP", + "VAEEncoderPoP" + ], + { + "title_aux": "comfy_PoP" + } + ], + "https://github.com/pkpkTech/ComfyUI-SaveAVIF": [ + [ + "SaveAvif" + ], + { + "title_aux": "ComfyUI-SaveAVIF" + } + ], + "https://github.com/pkpkTech/ComfyUI-TemporaryLoader": [ + [ + "LoadTempCheckpoint", + "LoadTempLoRA", + "LoadTempMultiLoRA" + ], + { + "title_aux": "ComfyUI-TemporaryLoader" + } + ], + "https://github.com/pythongosssss/ComfyUI-Custom-Scripts": [ + [ + "CheckpointLoader|pysssss", + "ConstrainImageforVideo|pysssss", + "ConstrainImage|pysssss", + "LoadText|pysssss", + "LoraLoader|pysssss", + "MathExpression|pysssss", + "MultiPrimitive|pysssss", + "PlaySound|pysssss", + "Repeater|pysssss", + "ReroutePrimitive|pysssss", + "SaveText|pysssss", + "ShowText|pysssss", + "StringFunction|pysssss" + ], + { + "title_aux": "pythongosssss/ComfyUI-Custom-Scripts" + } + ], + "https://github.com/pythongosssss/ComfyUI-WD14-Tagger": [ + [ + "WD14Tagger|pysssss" + ], + { + "title_aux": "ComfyUI WD 1.4 Tagger" + } + ], + "https://github.com/ramyma/A8R8_ComfyUI_nodes": [ + [ + "Base64ImageInput", + "Base64ImageOutput" + ], + { + "title_aux": "A8R8 ComfyUI Nodes" + } + ], + "https://github.com/rcfcu2000/zhihuige-nodes-comfyui": [ + [ + "Combine ZHGMasks", + "Cover ZHGMasks", + "From ZHG pip", + "GroundingDinoModelLoader (zhihuige)", + "GroundingDinoPIPESegment (zhihuige)", + "GroundingDinoSAMSegment (zhihuige)", + "InvertMask (zhihuige)", + "SAMModelLoader (zhihuige)", + "To ZHG pip", + "ZHG FaceIndex", + "ZHG GetMaskArea", + "ZHG Image Levels", + "ZHG SaveImage", + "ZHG SmoothEdge", + "ZHG UltimateSDUpscale" + ], + { + "title_aux": "zhihuige-nodes-comfyui" + } + ], + "https://github.com/rcsaquino/comfyui-custom-nodes": [ + [ + "BackgroundRemover | rcsaquino", + "VAELoader | rcsaquino", + "VAEProcessor | rcsaquino" + ], + { + "title_aux": "rcsaquino/comfyui-custom-nodes" + } + ], + "https://github.com/receyuki/comfyui-prompt-reader-node": [ + [ + "SDBatchLoader", + "SDLoraLoader", + "SDLoraSelector", + "SDParameterExtractor", + "SDParameterGenerator", + "SDPromptMerger", + "SDPromptReader", + "SDPromptSaver", + "SDTypeConverter" + ], + { + "author": "receyuki", + "description": "ComfyUI node version of the SD Prompt Reader", + "nickname": "SD Prompt Reader", + "title": "SD Prompt Reader", + "title_aux": "comfyui-prompt-reader-node" + } + ], + "https://github.com/redhottensors/ComfyUI-Prediction": [ + [ + "AvoidErasePrediction", + "CFGPrediction", + "CombinePredictions", + "ConditionedPrediction", + "PerpNegPrediction", + "SamplerCustomPrediction", + "ScalePrediction", + "ScaledGuidancePrediction" + ], + { + "author": 
"RedHotTensors", + "description": "Fully customizable Classifer Free Guidance for ComfyUI", + "nickname": "ComfyUI-Prediction", + "title": "ComfyUI-Prediction", + "title_aux": "ComfyUI-Prediction" + } + ], + "https://github.com/rgthree/rgthree-comfy": [ + [], + { + "author": "rgthree", + "description": "A bunch of nodes I created that I also find useful.", + "nickname": "rgthree", + "nodename_pattern": " \\(rgthree\\)$", + "title": "Comfy Nodes", + "title_aux": "rgthree's ComfyUI Nodes" + } + ], + "https://github.com/richinsley/Comfy-LFO": [ + [ + "LFO_Pulse", + "LFO_Sawtooth", + "LFO_Sine", + "LFO_Square", + "LFO_Triangle" + ], + { + "title_aux": "Comfy-LFO" + } + ], + "https://github.com/ricklove/comfyui-ricklove": [ + [ + "RL_Crop_Resize", + "RL_Crop_Resize_Batch", + "RL_Depth16", + "RL_Finetune_Analyze", + "RL_Finetune_Analyze_Batch", + "RL_Finetune_Variable", + "RL_Image_Shadow", + "RL_Image_Threshold_Channels", + "RL_Internet_Search", + "RL_LoadImageSequence", + "RL_Optical_Flow_Dip", + "RL_SaveImageSequence", + "RL_Uncrop", + "RL_Warp_Image", + "RL_Zoe_Depth_Map_Preprocessor", + "RL_Zoe_Depth_Map_Preprocessor_Raw_Infer", + "RL_Zoe_Depth_Map_Preprocessor_Raw_Process" + ], + { + "title_aux": "comfyui-ricklove" + } + ], + "https://github.com/rklaffehn/rk-comfy-nodes": [ + [ + "RK_CivitAIAddHashes", + "RK_CivitAIMetaChecker" + ], + { + "title_aux": "rk-comfy-nodes" + } + ], + "https://github.com/romeobuilderotti/ComfyUI-PNG-Metadata": [ + [ + "SetMetadataAll", + "SetMetadataString" + ], + { + "title_aux": "ComfyUI PNG Metadata" + } + ], + "https://github.com/rui40000/RUI-Nodes": [ + [ + "ABCondition", + "CharacterCount" + ], + { + "title_aux": "RUI-Nodes" + } + ], + "https://github.com/s1dlx/comfy_meh/raw/main/meh.py": [ + [ + "MergingExecutionHelper" + ], + { + "title_aux": "comfy_meh" + } + ], + "https://github.com/seanlynch/comfyui-optical-flow": [ + [ + "Apply optical flow", + "Compute optical flow", + "Visualize optical flow" + ], + { + "title_aux": "ComfyUI Optical Flow" + } + ], + "https://github.com/seanlynch/srl-nodes": [ + [ + "SRL Conditional Interrrupt", + "SRL Eval", + "SRL Filter Image List", + "SRL Format String" + ], + { + "title_aux": "SRL's nodes" + } + ], + "https://github.com/sergekatzmann/ComfyUI_Nimbus-Pack": [ + [ + "ImageResizeAndCropNode", + "ImageSquareAdapterNode" + ], + { + "title_aux": "ComfyUI_Nimbus-Pack" + } + ], + "https://github.com/shadowcz007/comfyui-consistency-decoder": [ + [ + "VAEDecodeConsistencyDecoder", + "VAELoaderConsistencyDecoder" + ], + { + "title_aux": "Consistency Decoder" + } + ], + "https://github.com/shadowcz007/comfyui-mixlab-nodes": [ + [ + "3DImage", + "AppInfo", + "AreaToMask", + "CenterImage", + "CharacterInText", + "ChatGPTOpenAI", + "CkptNames_", + "Color", + "DynamicDelayProcessor", + "EmbeddingPrompt", + "EnhanceImage", + "FaceToMask", + "FeatheredMask", + "FloatSlider", + "FloatingVideo", + "Font", + "GamePal", + "GetImageSize_", + "GradientImage", + "GridOutput", + "ImageColorTransfer", + "ImageCropByAlpha", + "IntNumber", + "JoinWithDelimiter", + "LaMaInpainting", + "LimitNumber", + "LoadImagesFromPath", + "LoadImagesFromURL", + "LoraNames_", + "MergeLayers", + "MirroredImage", + "MultiplicationNode", + "NewLayer", + "NoiseImage", + "OutlineMask", + "PromptImage", + "PromptSimplification", + "PromptSlide", + "RandomPrompt", + "ResizeImageMixlab", + "SamplerNames_", + "SaveImageToLocal", + "ScreenShare", + "Seed_", + "ShowLayer", + "ShowTextForGPT", + "SmoothMask", + "SpeechRecognition", + "SpeechSynthesis", + "SplitImage", 
+ "SplitLongMask", + "SvgImage", + "SwitchByIndex", + "TESTNODE_", + "TESTNODE_TOKEN", + "TextImage", + "TextInput_", + "TextToNumber", + "TransparentImage", + "VAEDecodeConsistencyDecoder", + "VAELoaderConsistencyDecoder" + ], + { + "title_aux": "comfyui-mixlab-nodes" + } + ], + "https://github.com/shadowcz007/comfyui-ultralytics-yolo": [ + [ + "DetectByLabel" + ], + { + "title_aux": "comfyui-ultralytics-yolo" + } + ], + "https://github.com/shiimizu/ComfyUI-PhotoMaker-Plus": [ + [ + "PhotoMakerEncodePlus", + "PhotoMakerStyles", + "PrepImagesForClipVisionFromPath" + ], + { + "title_aux": "ComfyUI PhotoMaker Plus" + } + ], + "https://github.com/shiimizu/ComfyUI-TiledDiffusion": [ + [ + "NoiseInversion", + "TiledDiffusion", + "VAEDecodeTiled_TiledDiffusion", + "VAEEncodeTiled_TiledDiffusion" + ], + { + "title_aux": "Tiled Diffusion & VAE for ComfyUI" + } + ], + "https://github.com/shiimizu/ComfyUI_smZNodes": [ + [ + "smZ CLIPTextEncode", + "smZ Settings" + ], + { + "title_aux": "smZNodes" + } + ], + "https://github.com/shingo1228/ComfyUI-SDXL-EmptyLatentImage": [ + [ + "SDXL Empty Latent Image" + ], + { + "title_aux": "ComfyUI-SDXL-EmptyLatentImage" + } + ], + "https://github.com/shingo1228/ComfyUI-send-eagle-slim": [ + [ + "Send Eagle with text", + "Send Webp Image to Eagle" + ], + { + "title_aux": "ComfyUI-send-Eagle(slim)" + } + ], + "https://github.com/shockz0rz/ComfyUI_InterpolateEverything": [ + [ + "OpenposePreprocessorInterpolate" + ], + { + "title_aux": "InterpolateEverything" + } + ], + "https://github.com/shockz0rz/comfy-easy-grids": [ + [ + "FloatToText", + "GridFloatList", + "GridFloats", + "GridIntList", + "GridInts", + "GridLoras", + "GridStringList", + "GridStrings", + "ImageGridCommander", + "IntToText", + "SaveImageGrid", + "TextConcatenator" + ], + { + "title_aux": "comfy-easy-grids" + } + ], + "https://github.com/siliconflow/onediff_comfy_nodes": [ + [ + "CompareModel", + "ControlNetGraphLoader", + "ControlNetGraphSaver", + "ControlNetSpeedup", + "ModelGraphLoader", + "ModelGraphSaver", + "ModelSpeedup", + "ModuleDeepCacheSpeedup", + "OneDiffCheckpointLoaderSimple", + "SVDSpeedup", + "ShowImageDiff", + "VaeGraphLoader", + "VaeGraphSaver", + "VaeSpeedup" + ], + { + "title_aux": "OneDiff Nodes" + } + ], + "https://github.com/sipherxyz/comfyui-art-venture": [ + [ + "AV_CheckpointMerge", + "AV_CheckpointModelsToParametersPipe", + "AV_CheckpointSave", + "AV_ControlNetEfficientLoader", + "AV_ControlNetEfficientLoaderAdvanced", + "AV_ControlNetEfficientStacker", + "AV_ControlNetEfficientStackerSimple", + "AV_ControlNetLoader", + "AV_ControlNetPreprocessor", + "AV_LoraListLoader", + "AV_LoraListStacker", + "AV_LoraLoader", + "AV_ParametersPipeToCheckpointModels", + "AV_ParametersPipeToPrompts", + "AV_PromptsToParametersPipe", + "AV_SAMLoader", + "AV_VAELoader", + "AspectRatioSelector", + "BLIPCaption", + "BLIPLoader", + "BooleanPrimitive", + "ColorBlend", + "ColorCorrect", + "DeepDanbooruCaption", + "DependenciesEdit", + "Fooocus_KSampler", + "Fooocus_KSamplerAdvanced", + "GetBoolFromJson", + "GetFloatFromJson", + "GetIntFromJson", + "GetObjectFromJson", + "GetSAMEmbedding", + "GetTextFromJson", + "ISNetLoader", + "ISNetSegment", + "ImageAlphaComposite", + "ImageApplyChannel", + "ImageExtractChannel", + "ImageGaussianBlur", + "ImageMuxer", + "ImageRepeat", + "ImageScaleDown", + "ImageScaleDownBy", + "ImageScaleDownToSize", + "ImageScaleToMegapixels", + "LaMaInpaint", + "LoadImageAsMaskFromUrl", + "LoadImageFromUrl", + "LoadJsonFromUrl", + "MergeModels", + "NumberScaler", + 
"OverlayInpaintedImage", + "OverlayInpaintedLatent", + "PrepareImageAndMaskForInpaint", + "QRCodeGenerator", + "RandomFloat", + "RandomInt", + "SAMEmbeddingToImage", + "SDXLAspectRatioSelector", + "SDXLPromptStyler", + "SeedSelector", + "StringToInt", + "StringToNumber" + ], + { + "title_aux": "comfyui-art-venture" + } + ], + "https://github.com/skfoo/ComfyUI-Coziness": [ + [ + "LoraTextExtractor-b1f83aa2", + "MultiLoraLoader-70bf3d77" + ], + { + "title_aux": "ComfyUI-Coziness" + } + ], + "https://github.com/smagnetize/kb-comfyui-nodes": [ + [ + "SingleImageDataUrlLoader" + ], + { + "title_aux": "kb-comfyui-nodes" + } + ], + "https://github.com/space-nuko/ComfyUI-Disco-Diffusion": [ + [ + "DiscoDiffusion_DiscoDiffusion", + "DiscoDiffusion_DiscoDiffusionExtraSettings", + "DiscoDiffusion_GuidedDiffusionLoader", + "DiscoDiffusion_OpenAICLIPLoader" + ], + { + "title_aux": "Disco Diffusion" + } + ], + "https://github.com/space-nuko/ComfyUI-OpenPose-Editor": [ + [ + "Nui.OpenPoseEditor" + ], + { + "title_aux": "OpenPose Editor" + } + ], + "https://github.com/space-nuko/nui-suite": [ + [ + "Nui.DynamicPromptsTextGen", + "Nui.FeelingLuckyTextGen", + "Nui.OutputString" + ], + { + "title_aux": "nui suite" + } + ], + "https://github.com/spacepxl/ComfyUI-HQ-Image-Save": [ + [ + "LoadEXR", + "LoadLatentEXR", + "SaveEXR", + "SaveLatentEXR", + "SaveTiff" + ], + { + "title_aux": "ComfyUI-HQ-Image-Save" + } + ], + "https://github.com/spacepxl/ComfyUI-Image-Filters": [ + [ + "AdainImage", + "AdainLatent", + "AlphaClean", + "AlphaMatte", + "BatchAlign", + "BatchAverageImage", + "BatchAverageUnJittered", + "BatchNormalizeImage", + "BatchNormalizeLatent", + "BlurImageFast", + "BlurMaskFast", + "ClampOutliers", + "ConvertNormals", + "DifferenceChecker", + "DilateErodeMask", + "EnhanceDetail", + "ExposureAdjust", + "GuidedFilterAlpha", + "ImageConstant", + "ImageConstantHSV", + "JitterImage", + "Keyer", + "LatentStats", + "NormalMapSimple", + "OffsetLatentImage", + "RemapRange", + "Tonemap", + "UnJitterImage", + "UnTonemap" + ], + { + "title_aux": "ComfyUI-Image-Filters" + } + ], + "https://github.com/spacepxl/ComfyUI-RAVE": [ + [ + "ConditioningDebug", + "ImageGridCompose", + "ImageGridDecompose", + "KSamplerRAVE", + "LatentGridCompose", + "LatentGridDecompose" + ], + { + "title_aux": "ComfyUI-RAVE" + } + ], + "https://github.com/spinagon/ComfyUI-seam-carving": [ + [ + "SeamCarving" + ], + { + "title_aux": "ComfyUI-seam-carving" + } + ], + "https://github.com/spinagon/ComfyUI-seamless-tiling": [ + [ + "CircularVAEDecode", + "MakeCircularVAE", + "OffsetImage", + "SeamlessTile" + ], + { + "title_aux": "Seamless tiling Node for ComfyUI" + } + ], + "https://github.com/spro/comfyui-mirror": [ + [ + "LatentMirror" + ], + { + "title_aux": "Latent Mirror node for ComfyUI" + } + ], + "https://github.com/ssitu/ComfyUI_UltimateSDUpscale": [ + [ + "UltimateSDUpscale", + "UltimateSDUpscaleNoUpscale" + ], + { + "title_aux": "UltimateSDUpscale" + } + ], + "https://github.com/ssitu/ComfyUI_fabric": [ + [ + "FABRICPatchModel", + "FABRICPatchModelAdv", + "KSamplerAdvFABRICAdv", + "KSamplerFABRIC", + "KSamplerFABRICAdv" + ], + { + "title_aux": "ComfyUI fabric" + } + ], + "https://github.com/ssitu/ComfyUI_restart_sampling": [ + [ + "KRestartSampler", + "KRestartSamplerAdv", + "KRestartSamplerSimple" + ], + { + "title_aux": "Restart Sampling" + } + ], + "https://github.com/ssitu/ComfyUI_roop": [ + [ + "RoopImproved", + "roop" + ], + { + "title_aux": "ComfyUI roop" + } + ], + "https://github.com/storyicon/comfyui_segment_anything": 
[ + [ + "GroundingDinoModelLoader (segment anything)", + "GroundingDinoSAMSegment (segment anything)", + "InvertMask (segment anything)", + "IsMaskEmpty", + "SAMModelLoader (segment anything)" + ], + { + "title_aux": "segment anything" + } + ], + "https://github.com/strimmlarn/ComfyUI_Strimmlarns_aesthetic_score": [ + [ + "AesthetlcScoreSorter", + "CalculateAestheticScore", + "LoadAesteticModel", + "ScoreToNumber" + ], + { + "title_aux": "ComfyUI_Strimmlarns_aesthetic_score" + } + ], + "https://github.com/styler00dollar/ComfyUI-deepcache": [ + [ + "DeepCache" + ], + { + "title_aux": "ComfyUI-deepcache" + } + ], + "https://github.com/styler00dollar/ComfyUI-sudo-latent-upscale": [ + [ + "SudoLatentUpscale" + ], + { + "title_aux": "ComfyUI-sudo-latent-upscale" + } + ], + "https://github.com/syllebra/bilbox-comfyui": [ + [ + "BilboXLut", + "BilboXPhotoPrompt", + "BilboXVignette" + ], + { + "title_aux": "BilboX's ComfyUI Custom Nodes" + } + ], + "https://github.com/sylym/comfy_vid2vid": [ + [ + "CheckpointLoaderSimpleSequence", + "DdimInversionSequence", + "KSamplerSequence", + "LoadImageMaskSequence", + "LoadImageSequence", + "LoraLoaderSequence", + "SetLatentNoiseSequence", + "TrainUnetSequence", + "VAEEncodeForInpaintSequence" + ], + { + "title_aux": "Vid2vid" + } + ], + "https://github.com/szhublox/ambw_comfyui": [ + [ + "Auto Merge Block Weighted", + "CLIPMergeSimple", + "CheckpointSave", + "ModelMergeBlocks", + "ModelMergeSimple" + ], + { + "title_aux": "Auto-MBW" + } + ], + "https://github.com/taabata/Comfy_Syrian_Falcon_Nodes/raw/main/SyrianFalconNodes.py": [ + [ + "CompositeImage", + "KSamplerAlternate", + "KSamplerPromptEdit", + "KSamplerPromptEditAndAlternate", + "LoopBack", + "QRGenerate", + "WordAsImage" + ], + { + "title_aux": "Syrian Falcon Nodes" + } + ], + "https://github.com/taabata/LCM_Inpaint-Outpaint_Comfy": [ + [ + "ComfyNodesToSaveCanvas", + "FloatNumber", + "FreeU_LCM", + "ImageOutputToComfyNodes", + "ImageShuffle", + "ImageSwitch", + "LCMGenerate", + "LCMGenerate_ReferenceOnly", + "LCMGenerate_SDTurbo", + "LCMGenerate_img2img", + "LCMGenerate_img2img_IPAdapter", + "LCMGenerate_img2img_controlnet", + "LCMGenerate_inpaintv2", + "LCMGenerate_inpaintv3", + "LCMLoader", + "LCMLoader_RefInpaint", + "LCMLoader_ReferenceOnly", + "LCMLoader_SDTurbo", + "LCMLoader_controlnet", + "LCMLoader_controlnet_inpaint", + "LCMLoader_img2img", + "LCMLoraLoader_inpaint", + "LCMLoraLoader_ipadapter", + "LCMLora_inpaint", + "LCMLora_ipadapter", + "LCMT2IAdapter", + "LCM_IPAdapter", + "LCM_IPAdapter_inpaint", + "LCM_outpaint_prep", + "LoadImageNode_LCM", + "Loader_SegmindVega", + "OutpaintCanvasTool", + "SaveImage_Canvas", + "SaveImage_LCM", + "SaveImage_Puzzle", + "SaveImage_PuzzleV2", + "SegmindVega", + "SettingsSwitch", + "stitch" + ], + { + "title_aux": "LCM_Inpaint-Outpaint_Comfy" + } + ], + "https://github.com/talesofai/comfyui-browser": [ + [ + "DifyTextGenerator //Browser", + "LoadImageByUrl //Browser", + "SelectInputs //Browser", + "UploadToRemote //Browser", + "XyzPlot //Browser" + ], + { + "title_aux": "ComfyUI Browser" + } + ], + "https://github.com/theUpsider/ComfyUI-Logic": [ + [ + "Bool", + "Compare", + "DebugPrint", + "Float", + "If ANY execute A else B", + "Int", + "String" + ], + { + "title_aux": "ComfyUI-Logic" + } + ], + "https://github.com/theUpsider/ComfyUI-Styles_CSV_Loader": [ + [ + "Load Styles CSV" + ], + { + "title_aux": "Styles CSV Loader Extension for ComfyUI" + } + ], + "https://github.com/thecooltechguy/ComfyUI-MagicAnimate": [ + [ + "MagicAnimate", + 
"MagicAnimateModelLoader" + ], + { + "title_aux": "ComfyUI-MagicAnimate" + } + ], + "https://github.com/thecooltechguy/ComfyUI-Stable-Video-Diffusion": [ + [ + "SVDDecoder", + "SVDModelLoader", + "SVDSampler", + "SVDSimpleImg2Vid" + ], + { + "title_aux": "ComfyUI Stable Video Diffusion" + } + ], + "https://github.com/thedyze/save-image-extended-comfyui": [ + [ + "SaveImageExtended" + ], + { + "title_aux": "Save Image Extended for ComfyUI" + } + ], + "https://github.com/tocubed/ComfyUI-AudioReactor": [ + [ + "AudioFrameTransformBeats", + "AudioFrameTransformShadertoy", + "AudioLoadPath", + "Shadertoy" + ], + { + "title_aux": "ComfyUI-AudioReactor" + } + ], + "https://github.com/toyxyz/ComfyUI_toyxyz_test_nodes": [ + [ + "CaptureWebcam", + "LatentDelay", + "LoadWebcamImage", + "SaveImagetoPath" + ], + { + "title_aux": "ComfyUI_toyxyz_test_nodes" + } + ], + "https://github.com/trojblue/trNodes": [ + [ + "JpgConvertNode", + "trColorCorrection", + "trLayering", + "trRouter", + "trRouterLonger" + ], + { + "title_aux": "trNodes" + } + ], + "https://github.com/trumanwong/ComfyUI-NSFW-Detection": [ + [ + "NSFWDetection" + ], + { + "title_aux": "ComfyUI-NSFW-Detection" + } + ], + "https://github.com/ttulttul/ComfyUI-Iterative-Mixer": [ + [ + "Batch Unsampler", + "Iterative Mixing KSampler", + "Iterative Mixing KSampler Advanced", + "IterativeMixingSampler", + "IterativeMixingScheduler", + "IterativeMixingSchedulerAdvanced", + "Latent Batch Comparison Plot", + "Latent Batch Statistics Plot", + "MixingMaskGenerator" + ], + { + "title_aux": "ComfyUI Iterative Mixing Nodes" + } + ], + "https://github.com/ttulttul/ComfyUI-Tensor-Operations": [ + [ + "Image Match Normalize", + "Latent Match Normalize" + ], + { + "title_aux": "ComfyUI-Tensor-Operations" + } + ], + "https://github.com/tudal/Hakkun-ComfyUI-nodes/raw/main/hakkun_nodes.py": [ + [ + "Any Converter", + "Calculate Upscale", + "Image Resize To Height", + "Image Resize To Width", + "Image size to string", + "Load Random Image", + "Load Text", + "Multi Text Merge", + "Prompt Parser", + "Random Line", + "Random Line 4" + ], + { + "title_aux": "Hakkun-ComfyUI-nodes" + } + ], + "https://github.com/tusharbhutt/Endless-Nodes": [ + [ + "ESS Aesthetic Scoring", + "ESS Aesthetic Scoring Auto", + "ESS Combo Parameterizer", + "ESS Combo Parameterizer & Prompts", + "ESS Eight Input Random", + "ESS Eight Input Text Switch", + "ESS Float to Integer", + "ESS Float to Number", + "ESS Float to String", + "ESS Float to X", + "ESS Global Envoy", + "ESS Image Reward", + "ESS Image Reward Auto", + "ESS Image Saver with JSON", + "ESS Integer to Float", + "ESS Integer to Number", + "ESS Integer to String", + "ESS Integer to X", + "ESS Number to Float", + "ESS Number to Integer", + "ESS Number to String", + "ESS Number to X", + "ESS Parameterizer", + "ESS Parameterizer & Prompts", + "ESS Six Float Output", + "ESS Six Input Random", + "ESS Six Input Text Switch", + "ESS Six Integer IO Switch", + "ESS Six Integer IO Widget", + "ESS String to Float", + "ESS String to Integer", + "ESS String to Num", + "ESS String to X", + "\u267e\ufe0f\ud83c\udf0a\u2728 Image Saver with JSON" + ], + { + "author": "BiffMunky", + "description": "A small set of nodes I created for various numerical and text inputs. 
Features an image saver with the ability to save JSON to a separate folder, parameter collection nodes, two aesthetic scoring models, switches for text and numbers, and conversion of string to numeric and vice versa.", + "nickname": "\u267e\ufe0f\ud83c\udf0a\u2728", + "title": "Endless \ufe0f\ud83c\udf0a\u2728 Nodes", + "title_aux": "Endless \ufe0f\ud83c\udf0a\u2728 Nodes" + } + ], + "https://github.com/twri/sdxl_prompt_styler": [ + [ + "SDXLPromptStyler", + "SDXLPromptStylerAdvanced" + ], + { + "title_aux": "SDXL Prompt Styler" + } + ], + "https://github.com/uarefans/ComfyUI-Fans": [ + [ + "Fans Prompt Styler Negative", + "Fans Prompt Styler Positive", + "Fans Styler", + "Fans Text Concatenate" + ], + { + "title_aux": "ComfyUI-Fans" + } + ], + "https://github.com/vanillacode314/SimpleWildcardsComfyUI": [ + [ + "SimpleConcat", + "SimpleWildcard" + ], + { + "author": "VanillaCode314", + "description": "A simple wildcard node for ComfyUI. Can also be used as a style prompt node.", + "nickname": "Simple Wildcard", + "title": "Simple Wildcard", + "title_aux": "Simple Wildcard" + } + ], + "https://github.com/vienteck/ComfyUI-Chat-GPT-Integration": [ + [ + "ChatGptPrompt" + ], + { + "title_aux": "ComfyUI-Chat-GPT-Integration" + } + ], + "https://github.com/violet-chen/comfyui-psd2png": [ + [ + "Psd2Png" + ], + { + "title_aux": "comfyui-psd2png" + } + ], + "https://github.com/wallish77/wlsh_nodes": [ + [ + "Alternating KSampler (WLSH)", + "Build Filename String (WLSH)", + "CLIP +/- w/Text Unified (WLSH)", + "CLIP Positive-Negative (WLSH)", + "CLIP Positive-Negative XL (WLSH)", + "CLIP Positive-Negative XL w/Text (WLSH)", + "CLIP Positive-Negative w/Text (WLSH)", + "Checkpoint Loader w/Name (WLSH)", + "Empty Latent by Pixels (WLSH)", + "Empty Latent by Ratio (WLSH)", + "Empty Latent by Size (WLSH)", + "Generate Border Mask (WLSH)", + "Grayscale Image (WLSH)", + "Image Load with Metadata (WLSH)", + "Image Save with Prompt (WLSH)", + "Image Save with Prompt File (WLSH)", + "Image Save with Prompt/Info (WLSH)", + "Image Save with Prompt/Info File (WLSH)", + "Image Scale By Factor (WLSH)", + "Image Scale by Shortside (WLSH)", + "KSamplerAdvanced (WLSH)", + "Multiply Integer (WLSH)", + "Outpaint to Image (WLSH)", + "Prompt Weight (WLSH)", + "Quick Resolution Multiply (WLSH)", + "Resolutions by Ratio (WLSH)", + "SDXL Quick Empty Latent (WLSH)", + "SDXL Quick Image Scale (WLSH)", + "SDXL Resolutions (WLSH)", + "SDXL Steps (WLSH)", + "Save Positive Prompt(WLSH)", + "Save Prompt (WLSH)", + "Save Prompt/Info (WLSH)", + "Seed and Int (WLSH)", + "Seed to Number (WLSH)", + "Simple Pattern Replace (WLSH)", + "Simple String Combine (WLSH)", + "Time String (WLSH)", + "Upscale by Factor with Model (WLSH)", + "VAE Encode for Inpaint w/Padding (WLSH)" + ], + { + "title_aux": "wlsh_nodes" + } + ], + "https://github.com/whatbirdisthat/cyberdolphin": [ + [ + "\ud83d\udc2c Gradio ChatInterface", + "\ud83d\udc2c OpenAI Advanced", + "\ud83d\udc2c OpenAI Compatible", + "\ud83d\udc2c OpenAI DALL\u00b7E", + "\ud83d\udc2c OpenAI Simple" + ], + { + "title_aux": "cyberdolphin" + } + ], + "https://github.com/whmc76/ComfyUI-Openpose-Editor-Plus": [ + [ + "CDL.OpenPoseEditorPlus" + ], + { + "title_aux": "ComfyUI-Openpose-Editor-Plus" + } + ], + "https://github.com/wmatson/easy-comfy-nodes": [ + [ + "EZAssocDictNode", + "EZAssocImgNode", + "EZAssocStrNode", + "EZEmptyDictNode", + "EZHttpPostNode", + "EZLoadImgBatchFromUrlsNode", + "EZLoadImgFromUrlNode", + "EZRemoveImgBackground", + "EZS3Uploader", + "EZVideoCombiner" + ], + { +
"title_aux": "easy-comfy-nodes" + } + ], + "https://github.com/wolfden/ComfyUi_PromptStylers": [ + [ + "SDXLPromptStylerAll", + "SDXLPromptStylerHorror", + "SDXLPromptStylerMisc", + "SDXLPromptStylerbyArtist", + "SDXLPromptStylerbyCamera", + "SDXLPromptStylerbyComposition", + "SDXLPromptStylerbyCyberpunkSurrealism", + "SDXLPromptStylerbyDepth", + "SDXLPromptStylerbyEnvironment", + "SDXLPromptStylerbyFantasySetting", + "SDXLPromptStylerbyFilter", + "SDXLPromptStylerbyFocus", + "SDXLPromptStylerbyImpressionism", + "SDXLPromptStylerbyLighting", + "SDXLPromptStylerbyMileHigh", + "SDXLPromptStylerbyMood", + "SDXLPromptStylerbyMythicalCreature", + "SDXLPromptStylerbyOriginal", + "SDXLPromptStylerbyQuantumRealism", + "SDXLPromptStylerbySteamPunkRealism", + "SDXLPromptStylerbySubject", + "SDXLPromptStylerbySurrealism", + "SDXLPromptStylerbyTheme", + "SDXLPromptStylerbyTimeofDay", + "SDXLPromptStylerbyWyvern", + "SDXLPromptbyCelticArt", + "SDXLPromptbyContemporaryNordicArt", + "SDXLPromptbyFashionArt", + "SDXLPromptbyGothicRevival", + "SDXLPromptbyIrishFolkArt", + "SDXLPromptbyRomanticNationalismArt", + "SDXLPromptbySportsArt", + "SDXLPromptbyStreetArt", + "SDXLPromptbyVikingArt", + "SDXLPromptbyWildlifeArt" + ], + { + "title_aux": "SDXL Prompt Styler (customized version by wolfden)" + } + ], + "https://github.com/wolfden/ComfyUi_String_Function_Tree": [ + [ + "StringFunction" + ], + { + "title_aux": "ComfyUi_String_Function_Tree" + } + ], + "https://github.com/wsippel/comfyui_ws/raw/main/sdxl_utility.py": [ + [ + "SDXLResolutionPresets" + ], + { + "title_aux": "SDXLResolutionPresets" + } + ], + "https://github.com/wutipong/ComfyUI-TextUtils": [ + [ + "Text Utils - Join N-Elements of String List", + "Text Utils - Join String List", + "Text Utils - Join Strings", + "Text Utils - Split String to List" + ], + { + "title_aux": "ComfyUI-TextUtils" + } + ], + "https://github.com/wwwins/ComfyUI-Simple-Aspect-Ratio": [ + [ + "SimpleAspectRatio" + ], + { + "title_aux": "ComfyUI-Simple-Aspect-Ratio" + } + ], + "https://github.com/xXAdonesXx/NodeGPT": [ + [ + "AppendAgent", + "Assistant", + "Chat", + "ChatGPT", + "CombineInput", + "Conditioning", + "CostumeAgent_1", + "CostumeAgent_2", + "CostumeMaster_1", + "Critic", + "DisplayString", + "DisplayTextAsImage", + "EVAL", + "Engineer", + "Executor", + "GroupChat", + "Image_generation_Conditioning", + "LM_Studio", + "LoadAPIconfig", + "LoadTXT", + "MemGPT", + "Memory_Excel", + "Model_1", + "Ollama", + "Output2String", + "Planner", + "Scientist", + "TextCombine", + "TextGeneration", + "TextGenerator", + "TextInput", + "TextOutput", + "UserProxy", + "llama-cpp", + "llava", + "oobaboogaOpenAI" + ], + { + "title_aux": "NodeGPT" + } + ], + "https://github.com/xiaoxiaodesha/hd_node": [ + [ + "Combine HDMasks", + "Cover HDMasks", + "HD FaceIndex", + "HD GetMaskArea", + "HD Image Levels", + "HD SmoothEdge", + "HD UltimateSDUpscale" + ], + { + "title_aux": "hd-nodes-comfyui" + } + ], + "https://github.com/yffyhk/comfyui_auto_danbooru": [ + [ + "GetDanbooru", + "TagEncode" + ], + { + "title_aux": "comfyui_auto_danbooru" + } + ], + "https://github.com/yolain/ComfyUI-Easy-Use": [ + [ + "dynamicThresholdingFull", + "easy LLLiteLoader", + "easy XYInputs: CFG Scale", + "easy XYInputs: Checkpoint", + "easy XYInputs: ControlNet", + "easy XYInputs: Denoise", + "easy XYInputs: Lora", + "easy XYInputs: ModelMergeBlocks", + "easy XYInputs: NegativeCond", + "easy XYInputs: NegativeCondList", + "easy XYInputs: PositiveCond", + "easy XYInputs: PositiveCondList", + "easy XYInputs: 
PromptSR", + "easy XYInputs: Sampler/Scheduler", + "easy XYInputs: Seeds++ Batch", + "easy XYInputs: Steps", + "easy XYPlot", + "easy XYPlotAdvanced", + "easy a1111Loader", + "easy boolean", + "easy cascadeKSampler", + "easy cascadeLoader", + "easy cleanGpuUsed", + "easy comfyLoader", + "easy compare", + "easy controlnetLoader", + "easy controlnetLoaderADV", + "easy convertAnything", + "easy detailerFix", + "easy float", + "easy fooocusInpaintLoader", + "easy fullCascadeKSampler", + "easy fullLoader", + "easy fullkSampler", + "easy globalSeed", + "easy hiresFix", + "easy if", + "easy imageInsetCrop", + "easy imagePixelPerfect", + "easy imageRemoveBG", + "easy imageSave", + "easy imageScaleDown", + "easy imageScaleDownBy", + "easy imageScaleDownToSize", + "easy imageSize", + "easy imageSizeByLongerSide", + "easy imageSizeBySide", + "easy imageSwitch", + "easy imageToMask", + "easy int", + "easy isSDXL", + "easy joinImageBatch", + "easy kSampler", + "easy kSamplerDownscaleUnet", + "easy kSamplerInpainting", + "easy kSamplerSDTurbo", + "easy kSamplerTiled", + "easy latentCompositeMaskedWithCond", + "easy latentNoisy", + "easy loraStack", + "easy negative", + "easy pipeIn", + "easy pipeOut", + "easy pipeToBasicPipe", + "easy portraitMaster", + "easy poseEditor", + "easy positive", + "easy preDetailerFix", + "easy preSampling", + "easy preSamplingAdvanced", + "easy preSamplingCascade", + "easy preSamplingDynamicCFG", + "easy preSamplingSdTurbo", + "easy promptList", + "easy rangeFloat", + "easy rangeInt", + "easy samLoaderPipe", + "easy seed", + "easy showAnything", + "easy showLoaderSettingsNames", + "easy showSpentTime", + "easy string", + "easy stylesSelector", + "easy svdLoader", + "easy ultralyticsDetectorPipe", + "easy unSampler", + "easy wildcards", + "easy xyAny", + "easy zero123Loader" + ], + { + "title_aux": "ComfyUI Easy Use" + } + ], + "https://github.com/yolanother/DTAIComfyImageSubmit": [ + [ + "DTSimpleSubmitImage", + "DTSubmitImage" + ], + { + "title_aux": "Comfy AI DoubTech.ai Image Sumission Node" + } + ], + "https://github.com/yolanother/DTAIComfyLoaders": [ + [ + "DTCLIPLoader", + "DTCLIPVisionLoader", + "DTCheckpointLoader", + "DTCheckpointLoaderSimple", + "DTControlNetLoader", + "DTDiffControlNetLoader", + "DTDiffusersLoader", + "DTGLIGENLoader", + "DTLoadImage", + "DTLoadImageMask", + "DTLoadLatent", + "DTLoraLoader", + "DTLorasLoader", + "DTStyleModelLoader", + "DTUpscaleModelLoader", + "DTVAELoader", + "DTunCLIPCheckpointLoader" + ], + { + "title_aux": "Comfy UI Online Loaders" + } + ], + "https://github.com/yolanother/DTAIComfyPromptAgent": [ + [ + "DTPromptAgent", + "DTPromptAgentString" + ], + { + "title_aux": "Comfy UI Prompt Agent" + } + ], + "https://github.com/yolanother/DTAIComfyQRCodes": [ + [ + "QRCode" + ], + { + "title_aux": "Comfy UI QR Codes" + } + ], + "https://github.com/yolanother/DTAIComfyVariables": [ + [ + "DTCLIPTextEncode", + "DTSingleLineStringVariable", + "DTSingleLineStringVariableNoClip", + "FloatVariable", + "IntVariable", + "StringFormat", + "StringFormatSingleLine", + "StringVariable" + ], + { + "title_aux": "Variables for Comfy UI" + } + ], + "https://github.com/yolanother/DTAIImageToTextNode": [ + [ + "DTAIImageToTextNode", + "DTAIImageUrlToTextNode" + ], + { + "title_aux": "Image to Text Node" + } + ], + "https://github.com/youyegit/tdxh_node_comfyui": [ + [ + "TdxhBoolNumber", + "TdxhClipVison", + "TdxhControlNetApply", + "TdxhControlNetProcessor", + "TdxhFloatInput", + "TdxhImageToSize", + "TdxhImageToSizeAdvanced", + 
"TdxhImg2ImgLatent", + "TdxhIntInput", + "TdxhLoraLoader", + "TdxhOnOrOff", + "TdxhReference", + "TdxhStringInput", + "TdxhStringInputTranslator" + ], + { + "title_aux": "tdxh_node_comfyui" + } + ], + "https://github.com/yuvraj108c/ComfyUI-Pronodes": [ + [ + "LoadYoutubeVideoNode" + ], + { + "title_aux": "ComfyUI-Pronodes" + } + ], + "https://github.com/yuvraj108c/ComfyUI-Vsgan": [ + [ + "UpscaleVideoTrtNode" + ], + { + "title_aux": "ComfyUI-Vsgan" + } + ], + "https://github.com/yuvraj108c/ComfyUI-Whisper": [ + [ + "Add Subtitles To Background", + "Add Subtitles To Frames", + "Apply Whisper", + "Resize Cropped Subtitles" + ], + { + "title_aux": "ComfyUI Whisper" + } + ], + "https://github.com/zcfrank1st/Comfyui-Toolbox": [ + [ + "PreviewJson", + "PreviewVideo", + "SaveJson", + "TestJsonPreview" + ], + { + "title_aux": "Comfyui-Toolbox" + } + ], + "https://github.com/zcfrank1st/Comfyui-Yolov8": [ + [ + "Yolov8Detection", + "Yolov8Segmentation" + ], + { + "title_aux": "ComfyUI Yolov8" + } + ], + "https://github.com/zcfrank1st/comfyui_visual_anagrams": [ + [ + "VisualAnagramsAnimate", + "VisualAnagramsSample" + ], + { + "title_aux": "comfyui_visual_anagram" + } + ], + "https://github.com/zer0TF/cute-comfy": [ + [ + "Cute.Placeholder" + ], + { + "title_aux": "Cute Comfy" + } + ], + "https://github.com/zfkun/ComfyUI_zfkun": [ + [ + "ZFLoadImagePath", + "ZFPreviewText", + "ZFPreviewTextMultiline", + "ZFShareScreen", + "ZFTextTranslation" + ], + { + "title_aux": "ComfyUI_zfkun" + } + ], + "https://github.com/zhongpei/ComfyUI-InstructIR": [ + [ + "InstructIRProcess", + "LoadInstructIRModel" + ], + { + "title_aux": "ComfyUI for InstructIR" + } + ], + "https://github.com/zhongpei/Comfyui_image2prompt": [ + [ + "Image2Text", + "LoadImage2TextModel" + ], + { + "title_aux": "Comfyui_image2prompt" + } + ], + "https://github.com/zhuanqianfish/ComfyUI-EasyNode": [ + [ + "EasyCaptureNode", + "EasyVideoOutputNode", + "SendImageWebSocket" + ], + { + "title_aux": "EasyCaptureNode for ComfyUI" + } + ], + "https://raw.githubusercontent.com/throttlekitty/SDXLCustomAspectRatio/main/SDXLAspectRatio.py": [ + [ + "SDXLAspectRatio" + ], + { + "title_aux": "SDXLCustomAspectRatio" + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/.cache/2259715867_alter-list.json b/custom_nodes/ComfyUI-Manager/.cache/2259715867_alter-list.json new file mode 100644 index 0000000000000000000000000000000000000000..e909c9eedc84a06d3ccaada1b9dca83a55311b92 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/.cache/2259715867_alter-list.json @@ -0,0 +1,209 @@ +{ + "items": [ + { + "description": "This extension provides preprocessor nodes for using controlnet.", + "id": "https://github.com/Fannovel16/comfyui_controlnet_aux", + "tags": "controlnet" + }, + { + "description": "This experimental nodes contains a 'Reference Only' node and a 'ModelSamplerTonemapNoiseTest' node corresponding to the 'Dynamic Threshold'.", + "id": "https://github.com/comfyanonymous/ComfyUI_experiments", + "tags": "Dynamic Thresholding, DT, CFG, controlnet, reference only" + }, + { + "description": "To implement the feature of automatically detecting faces and enhancing details, various detection nodes and detailers provided by the Impact Pack can be applied. 
Similarly to Loopback Scaler, it also provides various custom workflows that can apply KSampler while gradually scaling up.", + "id": "https://github.com/ltdrdata/ComfyUI-Impact-Pack", + "tags": "ddetailer, adetailer, ddsd, DD, loopback scaler, prompt, wildcard, dynamic prompt" + }, + { + "description": "The Inspire Pack provides the functionality of Lora Block Weight and Variation Seed.", + "id": "https://github.com/ltdrdata/ComfyUI-Inspire-Pack", + "tags": "lora block weight, effective block analyzer, lbw, variation seed" + }, + { + "description": "This extension provides a feature that generates segment masks on an image using a text prompt. When used in conjunction with Impact Pack, it enables applications such as DDSD.", + "id": "https://github.com/biegert/ComfyUI-CLIPSeg/raw/main/custom_nodes/clipseg.py", + "tags": "ddsd" + }, + { + "description": "This extension provides a way to recognize and enhance masks for faces, similar to Impact Pack.", + "id": "https://github.com/BadCafeCode/masquerade-nodes-comfyui", + "tags": "ddetailer" + }, + { + "description": "By using this extension, prompts like 'blue hair' can be prevented from interfering with other prompts by blocking the attribute 'blue' from being used in prompts other than 'hair'.", + "id": "https://github.com/BlenderNeko/ComfyUI_Cutoff", + "tags": "cutoff" + }, + { + "description": "There are differences in the processing methods of prompts, such as weighting and scheduling, between A1111 and ComfyUI. With this extension, various settings can be used to implement prompt processing methods similar to A1111. As this feature is also integrated into ComfyUI Cutoff, please download the Cutoff extension if you plan to use it in conjunction with Cutoff.", + "id": "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb", + "tags": "prompt, weight" + }, + { + "description": "There are differences in the processing methods of prompts, such as weighting and scheduling, between A1111 and ComfyUI. This extension helps to reproduce the same embedding as A1111.", + "id": "https://github.com/shiimizu/ComfyUI_smZNodes", + "tags": "prompt, weight" + }, + { + "description": "The extension provides an unsampler that reverses the sampling process, allowing for a function similar to img2img alt to be implemented. Furthermore, ComfyUI uses CPU's Random instead of GPU's Random for better reproducibility compared to A1111. This extension provides the ability to use GPU's Random for Latent Noise. However, since GPU's Random may vary depending on the GPU model, reproducibility on different devices cannot be guaranteed.", + "id": "https://github.com/BlenderNeko/ComfyUI_Noise", + "tags": "img2img alt, random" + }, + { + "description": "The extension provides the SeeCoder feature.", + "id": "https://github.com/BlenderNeko/ComfyUI_SeeCoder", + "tags": "seecoder, prompt-free-diffusion" + }, + { + "description": "This extension provides features such as a wildcard function that randomly selects prompts belonging to a category and the ability to directly load lora from prompts.", + "id": "https://github.com/lilly1987/ComfyUI_node_Lilly", + "tags": "prompt, wildcard" + }, + { + "description": "ComfyUI already provides the ability to composite latents by default.
However, this extension makes it more convenient to use by visualizing the composite area.", + "id": "https://github.com/Davemane42/ComfyUI_Dave_CustomNode", + "tags": "latent couple" + }, + { + "description": "This tool provides a viewer node that allows for checking multiple outputs in a grid, similar to the X/Y Plot extension.", + "id": "https://github.com/LEv145/images-grid-comfy-plugin", + "tags": "X/Y Plot" + }, + { + "description": "This extension generates clip text by taking an image as input and using the Deepbooru model.", + "id": "https://github.com/pythongosssss/ComfyUI-WD14-Tagger", + "tags": "deepbooru, clip interrogation" + }, + { + "description": "This node takes two models, merges individual blocks together at various ratios, and automatically rates each merge, keeping the ratio with the highest score. ", + "id": "https://github.com/szhublox/ambw_comfyui", + "tags": "supermerger" + }, + { + "description": "ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A. Uses the same script used in the A1111 extension to hopefully replicate images generated using the A1111 webui.", + "id": "https://github.com/ssitu/ComfyUI_UltimateSDUpscale", + "tags": "upscaler, Ultimate SD Upscale" + }, + { + "description": "A1111 uses GPU-based random noise when sampling. This extension offers a KSampler utilizing GPU-based random noise.", + "id": "https://github.com/dawangraoming/ComfyUI_ksampler_gpu/raw/main/ksampler_gpu.py", + "tags": "random, noise" + }, + { + "description": "This extension provides nodes with the functionality of dynamic prompts.", + "id": "https://github.com/space-nuko/nui-suite", + "tags": "prompt, dynamic prompt" + }, + { + "description": "This extension provides a bunch of nodes, including roop.", + "id": "https://github.com/melMass/comfy_mtb", + "tags": "roop" + }, + { + "description": "This extension provides nodes for the roop A1111 webui script.", + "id": "https://github.com/ssitu/ComfyUI_roop", + "tags": "roop" + }, + { + "description": "This extension provides the ability to use prompts like \n\n**a [large::0.1] [cat|dog:0.05] [::0.5] [in a park:in space:0.4]**\n\n", + "id": "https://github.com/asagi4/comfyui-prompt-control", + "tags": "prompt, prompt editing" + }, + { + "description": "This extension is a port of sd-dynamic-prompt to ComfyUI.", + "id": "https://github.com/adieyal/comfyui-dynamicprompts", + "tags": "prompt, dynamic prompt" + }, + { + "description": "An Anime Background Remover node for ComfyUI, based on this HF space; works the same as the ABG extension in automatic1111.", + "id": "https://github.com/kwaroran/abg-comfyui", + "tags": "abg, background remover" + }, + { + "description": "This is a port of the sd-webui-roop-nsfw extension to ComfyUI.", + "id": "https://github.com/Gourieff/comfyui-reactor-node", + "tags": "reactor, sd-webui-roop-nsfw" + }, + { + "description": "These custom nodes provide functionality similar to regional prompts, offering couple features at the attention level.", + "id": "https://github.com/laksjdjf/attention-couple-ComfyUI", + "tags": "regional prompt, latent couple, prompt" + }, + { + "description": "These custom nodes provide functionality that assists in animation creation, similar to deforum.", + "id": "https://github.com/FizzleDorf/ComfyUI_FizzNodes", + "tags": "deforum" + }, + { + "description": "These custom nodes provide functionality that assists in animation creation, similar to deforum.", + "id": "https://github.com/seanlynch/comfyui-optical-flow", + "tags": "deforum, vid2vid" + },
+ { + "description": "Similar to sd-webui-fabric, this custom nodes provide the functionality of [a/FABRIC](https://github.com/sd-fabric/fabric).", + "id": "https://github.com/ssitu/ComfyUI_fabric", + "tags": "fabric" + }, + { + "description": "Similar to text-generation-webui, this custom nodes provide the functionality of [a/exllama](https://github.com/turboderp/exllama).", + "id": "https://github.com/Zuellni/ComfyUI-ExLlama", + "tags": "ExLlama, prompt, language model" + }, + { + "description": "ComfyUI node for generating seamless textures Replicates 'Tiling' option from A1111", + "id": "https://github.com/spinagon/ComfyUI-seamless-tiling", + "tags": "tiling" + }, + { + "description": "This extension is a port of the [a/sd-webui-cd-tuner](https://github.com/hako-mikan/sd-webui-cd-tuner)(a.k.a. CD(color/Detail) Tuner )and [a/sd-webui-negpip](https://github.com/hako-mikan/sd-webui-negpip)(a.k.a. NegPiP) extensions of A1111 to ComfyUI.", + "id": "https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI", + "tags": "cd-tuner, negpip" + }, + { + "description": "This custom node is a port of the Dynamic Thresholding extension from A1111 to make it available for use in ComfyUI.", + "id": "https://github.com/mcmonkeyprojects/sd-dynamic-thresholding", + "tags": "DT, dynamic thresholding" + }, + { + "description": "This extension provides custom nodes developed based on [a/LaMa](https://github.com/advimman/lama) and [a/Inpainting anything](https://github.com/geekyutao/Inpaint-Anything).", + "id": "https://github.com/hhhzzyang/Comfyui_Lama", + "tags": "lama, inpainting anything" + }, + { + "description": "This extension provides custom nodes for [a/LaMa](https://github.com/advimman/lama) functionality.", + "id": "https://github.com/mlinmg/ComfyUI-LaMA-Preprocessor", + "tags": "lama" + }, + { + "description": "This extension provides custom nodes for [a/SD Webui Diffusion Color Grading](https://github.com/Haoming02/sd-webui-diffusion-cg) functionality.", + "id": "https://github.com/Haoming02/comfyui-diffusion-cg", + "tags": "diffusion-cg" + }, + { + "description": "This extension provides custom nodes for [a/sd-webui-cads](https://github.com/v0xie/sd-webui-cads) functionality.", + "id": "https://github.com/asagi4/ComfyUI-CADS", + "tags": "diffusion-cg" + }, + { + "description": "This extension supports both A1111 and ComfyUI simultaneously.", + "id": "https://git.mmaker.moe/mmaker/sd-webui-color-enhance", + "tags": "color-enhance" + }, + { + "description": "This extension provides custom nodes for [a/Mixture of Diffusers](https://github.com/albarji/mixture-of-diffusers) and [a/MultiDiffusion](https://github.com/omerbt/MultiDiffusion)", + "id": "https://github.com/shiimizu/ComfyUI-TiledDiffusion", + "tags": "multidiffusion" + }, + { + "description": "This extension provides some alternative functionalities of the [a/sd-webui-bmab](https://github.com/portu-sim/sd-webui-bmab) extension.", + "id": "https://github.com/abyz22/image_control", + "tags": "BMAB" + }, + { + "description": "This extension provides some alternative functionalities of the [a/stable-diffusion-webui-sonar](https://github.com/Kahsolt/stable-diffusion-webui-sonar) extension.", + "id": "https://github.com/blepping/ComfyUI-sonar", + "tags": "sonar" + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/.cache/4245046894_model-list.json b/custom_nodes/ComfyUI-Manager/.cache/4245046894_model-list.json new file mode 100644 index 0000000000000000000000000000000000000000..b03cf0bd5d2b399e80db59eaf5fb6eec05971998 --- 
/dev/null +++ b/custom_nodes/ComfyUI-Manager/.cache/4245046894_model-list.json @@ -0,0 +1,2024 @@ +{ + "models": [ + { + "base": "SDXL", + "description": "(SDXL Version) To view the preview in high quality while running samples in ComfyUI, you will need this model.", + "filename": "taesdxl_decoder.pth", + "name": "TAESDXL Decoder", + "reference": "https://github.com/madebyollin/taesd", + "save_path": "vae_approx", + "type": "TAESD", + "url": "https://github.com/madebyollin/taesd/raw/main/taesdxl_decoder.pth" + }, + { + "base": "SDXL", + "description": "(SDXL Version) To view the preview in high quality while running samples in ComfyUI, you will need this model.", + "filename": "taesdxl_encoder.pth", + "name": "TAESDXL Encoder", + "reference": "https://github.com/madebyollin/taesd", + "save_path": "vae_approx", + "type": "TAESD", + "url": "https://github.com/madebyollin/taesd/raw/main/taesdxl_encoder.pth" + }, + { + "base": "SD1.x", + "description": "To view the preview in high quality while running samples in ComfyUI, you will need this model.", + "filename": "taesd_decoder.pth", + "name": "TAESD Decoder", + "reference": "https://github.com/madebyollin/taesd", + "save_path": "vae_approx", + "type": "TAESD", + "url": "https://github.com/madebyollin/taesd/raw/main/taesd_decoder.pth" + }, + { + "base": "SD1.x", + "description": "To view the preview in high quality while running samples in ComfyUI, you will need this model.", + "filename": "taesd_encoder.pth", + "name": "TAESD Encoder", + "reference": "https://github.com/madebyollin/taesd", + "save_path": "vae_approx", + "type": "TAESD", + "url": "https://github.com/madebyollin/taesd/raw/main/taesd_encoder.pth" + }, + { + "base": "upscale", + "description": "RealESRGAN x2 upscaler model", + "filename": "RealESRGAN_x2.pth", + "name": "RealESRGAN x2", + "reference": "https://huggingface.co/ai-forever/Real-ESRGAN", + "save_path": "default", + "type": "upscale", + "url": "https://huggingface.co/ai-forever/Real-ESRGAN/resolve/main/RealESRGAN_x2.pth" + }, + { + "base": "upscale", + "description": "RealESRGAN x4 upscaler model", + "filename": "RealESRGAN_x4.pth", + "name": "RealESRGAN x4", + "reference": "https://huggingface.co/ai-forever/Real-ESRGAN", + "save_path": "default", + "type": "upscale", + "url": "https://huggingface.co/ai-forever/Real-ESRGAN/resolve/main/RealESRGAN_x4.pth" + }, + { + "base": "upscale", + "description": "ESRGAN x4 upscaler model", + "filename": "ESRGAN_4x.pth", + "name": "ESRGAN x4", + "reference": "https://huggingface.co/Afizi/ESRGAN_4x.pth", + "save_path": "default", + "type": "upscale", + "url": "https://huggingface.co/Afizi/ESRGAN_4x.pth/resolve/main/ESRGAN_4x.pth" + }, + { + "base": "upscale", + "description": "4x_foolhardy_Remacri upscaler model", + "filename": "4x_foolhardy_Remacri.pth", + "name": "4x_foolhardy_Remacri", + "reference": "https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri", + "save_path": "default", + "type": "upscale", + "url": "https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri/resolve/main/4x_foolhardy_Remacri.pth" + }, + { + "base": "upscale", + "description": "4x-AnimeSharp upscaler model", + "filename": "4x-AnimeSharp.pth", + "name": "4x-AnimeSharp", + "reference": "https://huggingface.co/Kim2091/AnimeSharp/", + "save_path": "default", + "type": "upscale", + "url": "https://huggingface.co/Kim2091/AnimeSharp/resolve/main/4x-AnimeSharp.pth" + }, + { + "base": "upscale", + "description": "4x-UltraSharp upscaler model", + "filename": "4x-UltraSharp.pth", + "name": "4x-UltraSharp", +
"reference": "https://huggingface.co/Kim2091/UltraSharp/", + "save_path": "default", + "type": "upscale", + "url": "https://huggingface.co/Kim2091/UltraSharp/resolve/main/4x-UltraSharp.pth" + }, + { + "base": "upscale", + "description": "4x_NMKD-Siax_200k upscaler model", + "filename": "4x_NMKD-Siax_200k.pth", + "name": "4x_NMKD-Siax_200k", + "reference": "https://huggingface.co/gemasai/4x_NMKD-Siax_200k", + "save_path": "default", + "type": "upscale", + "url": "https://huggingface.co/gemasai/4x_NMKD-Siax_200k/resolve/main/4x_NMKD-Siax_200k.pth" + }, + { + "base": "upscale", + "description": "8x_NMKD-Superscale_150000_G upscaler model", + "filename": "8x_NMKD-Superscale_150000_G.pth", + "name": "8x_NMKD-Superscale_150000_G", + "reference": "https://huggingface.co/uwg/upscaler", + "save_path": "default", + "type": "upscale", + "url": "https://huggingface.co/uwg/upscaler/resolve/main/ESRGAN/8x_NMKD-Superscale_150000_G.pth" + }, + { + "base": "upscale", + "description": "LDSR upscale model. Through the [a/ComfyUI-Flowty-LDSR](https://github.com/flowtyone/ComfyUI-Flowty-LDSR) extension, the upscale model can be utilized.", + "filename": "last.ckpt", + "name": "LDSR(Latent Diffusion Super Resolution)", + "reference": "https://github.com/CompVis/latent-diffusion", + "save_path": "upscale_models/ldsr", + "type": "upscale", + "url": "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1" + }, + { + "base": "upscale", + "description": "[3.53GB] This upscaling model is a latent text-guided diffusion model and should be used with SD_4XUpscale_Conditioning and KSampler.", + "filename": "x4-upscaler-ema.safetensors", + "name": "stabilityai/stable-diffusion-x4-upscaler", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler", + "save_path": "checkpoints/upscale", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.safetensors" + }, + { + "base": "inswapper", + "description": "[264MB] Checkpoint of the insightface swapper model\n(used by ComfyUI-FaceSwap, comfyui-reactor-node, CharacterFaceSwap,\nComfyUI roop and comfy_mtb)", + "filename": "inswapper_128_fp16.onnx", + "name": "Inswapper-fp16 (face swap)", + "reference": "https://github.com/facefusion/facefusion-assets", + "save_path": "insightface", + "type": "insightface", + "url": "https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128_fp16.onnx" + }, + { + "base": "inswapper", + "description": "[529MB] Checkpoint of the insightface swapper model\n(used by ComfyUI-FaceSwap, comfyui-reactor-node, CharacterFaceSwap,\nComfyUI roop and comfy_mtb)", + "filename": "inswapper_128.onnx", + "name": "Inswapper (face swap)", + "reference": "https://github.com/facefusion/facefusion-assets", + "save_path": "insightface", + "type": "insightface", + "url": "https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128.onnx" + }, + { + "base": "deepbump", + "description": "Checkpoint of the deepbump model to generate height and normal maps textures from an image (requires comfy_mtb)", + "filename": "deepbump256.onnx", + "name": "Deepbump", + "reference": "https://github.com/HugoTini/DeepBump", + "save_path": "deepbump", + "type": "deepbump", + "url": "https://github.com/HugoTini/DeepBump/raw/master/deepbump256.onnx" + }, + { + "base": "face_restore", + "description": "Face restoration", + "filename": "GFPGANv1.3.pth", + "name": "GFPGAN 1.3", + "reference": "https://github.com/TencentARC/GFPGAN", + 
"save_path": "face_restore", + "type": "face_restore", + "url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth" + }, + { + "base": "face_restore", + "description": "Face restoration", + "filename": "GFPGANv1.4.pth", + "name": "GFPGAN 1.4", + "reference": "https://github.com/TencentARC/GFPGAN", + "save_path": "face_restore", + "type": "face_restore", + "url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth" + }, + { + "base": "face_restore", + "description": "Face restoration", + "filename": "RestoreFormer.pth", + "name": "RestoreFormer", + "reference": "https://github.com/TencentARC/GFPGAN", + "save_path": "face_restore", + "type": "face_restore", + "url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/RestoreFormer.pth" + }, + { + "base": "SVD", + "description": "Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.\nNOTE: 14 frames @ 576x1024", + "filename": "svd.safetensors", + "name": "Stable Video Diffusion Image-to-Video", + "reference": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid", + "save_path": "checkpoints/SVD", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/resolve/main/svd.safetensors" + }, + { + "base": "zero123", + "description": "Stable Zero123 is a model for view-conditioned image generation based on [a/Zero123](https://github.com/cvlab-columbia/zero123).", + "filename": "stable_zero123.ckpt", + "name": "stabilityai/Stable Zero123", + "reference": "https://huggingface.co/stabilityai/stable-zero123", + "save_path": "checkpoints/zero123", + "type": "zero123", + "url": "https://huggingface.co/stabilityai/stable-zero123/resolve/main/stable_zero123.ckpt" + }, + { + "base": "SVD", + "description": "Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.\nNOTE: 25 frames @ 576x1024 ", + "filename": "svd_xt.safetensors", + "name": "Stable Video Diffusion Image-to-Video (XT)", + "reference": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt", + "save_path": "checkpoints/SVD", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/svd_xt.safetensors" + }, + { + "base": "SD1.5", + "description": "If you use this embedding with negatives, you can solve the issue of damaging your hands.", + "filename": "negative_hand-neg.pt", + "name": "negative_hand Negative Embedding", + "reference": "https://civitai.com/models/56519/negativehand-negative-embedding", + "save_path": "default", + "type": "embeddings", + "url": "https://civitai.com/api/download/models/60938" + }, + { + "base": "SD1.5", + "description": "The idea behind this embedding was to somehow train the negative prompt as an embedding, thus unifying the basis of the negative prompt into one word or embedding.", + "filename": "bad_prompt_version2-neg.pt", + "name": "bad_prompt Negative Embedding", + "reference": "https://civitai.com/models/55700/badprompt-negative-embedding", + "save_path": "default", + "type": "embeddings", + "url": "https://civitai.com/api/download/models/60095" + }, + { + "base": "SD1.5", + "description": "These embedding learn what disgusting compositions and color patterns are, including faulty human anatomy, offensive color schemes, upside-down spatial structures, and more. 
Placing it in the negative can go a long way to avoiding these things.", + "filename": "ng_deepnegative_v1_75t.pt", + "name": "Deep Negative V1.75", + "reference": "https://civitai.com/models/4629/deep-negative-v1x", + "save_path": "default", + "type": "embeddings", + "url": "https://civitai.com/api/download/models/5637" + }, + { + "base": "SD1.5", + "description": "This embedding should be used in your NEGATIVE prompt. Adjust the strength as desired (seems to scale well without any distortions), the strength required may vary based on positive and negative prompts.", + "filename": "easynegative.safetensors", + "name": "EasyNegative", + "reference": "https://civitai.com/models/7808/easynegative", + "save_path": "default", + "type": "embeddings", + "url": "https://civitai.com/api/download/models/9208" + }, + { + "base": "Stable Cascade", + "description": "[4.55GB] Stable Cascade stage_b checkpoints", + "filename": "stable_cascade_stage_b.safetensors", + "name": "stabilityai/comfyui_checkpoints/stable_cascade_stage_b.safetensors", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "checkpoints/Stable-Cascade", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/comfyui_checkpoints/stable_cascade_stage_b.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[9.22GB] Stable Cascade stage_c checkpoints", + "filename": "stable_cascade_stage_c.safetensors", + "name": "stabilityai/comfyui_checkpoints/stable_cascade_stage_c.safetensors", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "checkpoints/Stable-Cascade", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/comfyui_checkpoints/stable_cascade_stage_c.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[73.7MB] Stable Cascade: stage_a", + "filename": "stage_a.safetensors", + "name": "stabilityai/Stable Cascade: stage_a.safetensors (VAE)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "vae/Stable-Cascade", + "type": "VAE", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_a.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[81.5MB] Stable Cascade: effnet_encoder.\nVAE encoder for stage_c latent.", + "filename": "effnet_encoder.safetensors", + "name": "stabilityai/Stable Cascade: effnet_encoder.safetensors (VAE)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "vae/Stable-Cascade", + "type": "VAE", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/effnet_encoder.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[6.25GB] Stable Cascade: stage_b", + "filename": "stage_b.safetensors", + "name": "stabilityai/Stable Cascade: stage_b.safetensors (UNET)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "unet/Stable-Cascade", + "type": "unet", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[3.13GB] Stable Cascade: stage_b/bf16", + "filename": "stage_b_bf16.safetensors", + "name": "stabilityai/Stable Cascade: stage_b_bf16.safetensors (UNET)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "unet/Stable-Cascade", + "type": "unet", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_bf16.safetensors" + }, + { + 
"base": "Stable Cascade", + "description": "[2.8GB] Stable Cascade: stage_b/lite", + "filename": "stage_b_lite.safetensors", + "name": "stabilityai/Stable Cascade: stage_b_lite.safetensors (UNET)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "unet/Stable-Cascade", + "type": "unet", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_lite.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[1.4GB] Stable Cascade: stage_b/bf16,lite", + "filename": "stage_b_lite_bf16.safetensors", + "name": "stabilityai/Stable Cascade: stage_b_lite.safetensors (UNET)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "unet/Stable-Cascade", + "type": "unet", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_lite_bf16.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[14.4GB] Stable Cascade: stage_c", + "filename": "stage_c.safetensors", + "name": "stabilityai/Stable Cascade: stage_c.safetensors (UNET)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "unet/Stable-Cascade", + "type": "unet", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[7.18GB] Stable Cascade: stage_c/bf16", + "filename": "stage_c_bf16.safetensors", + "name": "stabilityai/Stable Cascade: stage_c_bf16.safetensors (UNET)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "unet/Stable-Cascade", + "type": "unet", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_bf16.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[4.12GB] Stable Cascade: stage_c/lite", + "filename": "stage_c_lite.safetensors", + "name": "stabilityai/Stable Cascade: stage_c_lite.safetensors (UNET)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "unet/Stable-Cascade", + "type": "unet", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_lite.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[2.06GB] Stable Cascade: stage_c/bf16,lite", + "filename": "stage_c_lite_bf16.safetensors", + "name": "stabilityai/Stable Cascade: stage_c_lite.safetensors (UNET)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "unet/Stable-Cascade", + "type": "unet", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_lite_bf16.safetensors" + }, + { + "base": "Stable Cascade", + "description": "[1.39GB] Stable Cascade: text_encoder", + "filename": "model.safetensors", + "name": "stabilityai/Stable Cascade: text_encoder (CLIP)", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "save_path": "clip/Stable-Cascade", + "type": "clip", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/text_encoder/model.safetensors" + }, + { + "base": "SDXL", + "description": "[6.9GB] SDXL-Turbo 1.0 fp16", + "filename": "sd_xl_turbo_1.0_fp16.safetensors", + "name": "SDXL-Turbo 1.0 (fp16)", + "reference": "https://huggingface.co/stabilityai/sdxl-turbo", + "save_path": "checkpoints/SDXL-TURBO", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors" + }, + { + "base": "SDXL", + "description": "[13.9GB] SDXL-Turbo 1.0", + "filename": "sd_xl_turbo_1.0.safetensors", + "name": "SDXL-Turbo 1.0", + 
"reference": "https://huggingface.co/stabilityai/sdxl-turbo", + "save_path": "checkpoints/SDXL-TURBO", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0.safetensors" + }, + { + "base": "SDXL", + "description": "Stable Diffusion XL base model (VAE 0.9)", + "filename": "sd_xl_base_1.0_0.9vae.safetensors", + "name": "sd_xl_base_1.0_0.9vae.safetensors", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors" + }, + { + "base": "SDXL", + "description": "Stable Diffusion XL base model", + "filename": "sd_xl_base_1.0.safetensors", + "name": "sd_xl_base_1.0.safetensors", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors" + }, + { + "base": "SDXL", + "description": "Stable Diffusion XL refiner model (VAE 0.9)", + "filename": "sd_xl_refiner_1.0_0.9vae.safetensors", + "name": "sd_xl_refiner_1.0_0.9vae.safetensors", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0_0.9vae.safetensors" + }, + { + "base": "SDXL", + "description": "Stable Diffusion XL refiner model", + "filename": "sd_xl_refiner_1.0.safetensors", + "name": "stable-diffusion-xl-refiner-1.0", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors" + }, + { + "base": "SDXL", + "description": "[5.14GB] Stable Diffusion XL inpainting model 0.1. You need UNETLoader instead of CheckpointLoader.", + "filename": "diffusion_pytorch_model.fp16.safetensors", + "name": "diffusers/stable-diffusion-xl-1.0-inpainting-0.1 (UNET/fp16)", + "reference": "https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1", + "save_path": "unet/xl-inpaint-0.1", + "type": "unet", + "url": "https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1/resolve/main/unet/diffusion_pytorch_model.fp16.safetensors" + }, + { + "base": "SDXL", + "description": "[10.3GB] Stable Diffusion XL inpainting model 0.1. 
You need UNETLoader instead of CheckpointLoader.", + "filename": "diffusion_pytorch_model.safetensors", + "name": "diffusers/stable-diffusion-xl-1.0-inpainting-0.1 (UNET)", + "reference": "https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1", + "save_path": "unet/xl-inpaint-0.1", + "type": "unet", + "url": "https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1/resolve/main/unet/diffusion_pytorch_model.safetensors" + }, + { + "base": "SDXL", + "description": "Stable Diffusion XL offset LoRA", + "filename": "sd_xl_offset_example-lora_1.0.safetensors", + "name": "sd_xl_offset_example-lora_1.0.safetensors", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0", + "save_path": "default", + "type": "lora", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors" + }, + { + "base": "SD1.5", + "description": "Stable Diffusion 1.5 base model", + "filename": "v1-5-pruned-emaonly.ckpt", + "name": "v1-5-pruned-emaonly.ckpt", + "reference": "https://huggingface.co/runwayml/stable-diffusion-v1-5", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt" + }, + { + "base": "SD2", + "description": "Stable Diffusion 2 base model (512)", + "filename": "v2-1_512-ema-pruned.safetensors", + "name": "v2-1_512-ema-pruned.safetensors", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-2-1-base", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.safetensors" + }, + { + "base": "SD2", + "description": "Stable Diffusion 2 base model (768)", + "filename": "v2-1_768-ema-pruned.safetensors", + "name": "v2-1_768-ema-pruned.safetensors", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-2-1", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors" + }, + { + "base": "SD1.5", + "description": "AbyssOrangeMix2 - hard version (anime style)", + "filename": "AbyssOrangeMix2_hard.safetensors", + "name": "AbyssOrangeMix2 (hard)", + "reference": "https://huggingface.co/WarriorMama777/OrangeMixs", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_hard.safetensors" + }, + { + "base": "SD1.5", + "description": "AbyssOrangeMix3 - A1 (anime style)", + "filename": "AOM3A1_orangemixs.safetensors", + "name": "AbyssOrangeMix3 A1", + "reference": "https://huggingface.co/WarriorMama777/OrangeMixs", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1_orangemixs.safetensors" + }, + { + "base": "SD1.5", + "description": "AbyssOrangeMix - A3 (anime style)", + "filename": "AOM3A3_orangemixs.safetensors", + "name": "AbyssOrangeMix3 A3", + "reference": "https://huggingface.co/WarriorMama777/OrangeMixs", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A3_orangemixs.safetensors" + }, + { + "base": "SD1.5", + "description": "Anything v3 (anime style)", + "filename": "anything-v3-fp16-pruned.safetensors", + "name": "Anything v3 (fp16; pruned)", + "reference": 
"https://huggingface.co/Linaqruf/anything-v3.0", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/anything-v3-fp16-pruned.safetensors" + }, + { + "base": "SD2.1", + "description": "Waifu Diffusion 1.5 Beta3", + "filename": "wd-illusion-fp16.safetensors", + "name": "Waifu Diffusion 1.5 Beta3 (fp16)", + "reference": "https://huggingface.co/waifu-diffusion/wd-1-5-beta3", + "save_path": "default", + "type": "checkpoints", + "url": "https://huggingface.co/waifu-diffusion/wd-1-5-beta3/resolve/main/wd-illusion-fp16.safetensors" + }, + { + "base": "SD2.1", + "description": "Mix model (SD2.1 unCLIP + illuminatiDiffusionV1_v11)", + "filename": "illuminatiDiffusionV1_v11-unclip-h-fp16.safetensors", + "name": "illuminatiDiffusionV1_v11 unCLIP model", + "reference": "https://huggingface.co/comfyanonymous/illuminatiDiffusionV1_v11_unCLIP", + "save_path": "default", + "type": "unclip", + "url": "https://huggingface.co/comfyanonymous/illuminatiDiffusionV1_v11_unCLIP/resolve/main/illuminatiDiffusionV1_v11-unclip-h-fp16.safetensors" + }, + { + "base": "SD2.1", + "description": "Mix model (SD2.1 unCLIP + Waifu Diffusion 1.5)", + "filename": "wd-1-5-beta2-aesthetic-unclip-h-fp16.safetensors", + "name": "Waifu Diffusion 1.5 unCLIP model", + "reference": "https://huggingface.co/comfyanonymous/wd-1.5-beta2_unCLIP", + "save_path": "default", + "type": "unclip", + "url": "https://huggingface.co/comfyanonymous/wd-1.5-beta2_unCLIP/resolve/main/wd-1-5-beta2-aesthetic-unclip-h-fp16.safetensors" + }, + { + "base": "SDXL VAE", + "description": "SDXL-VAE", + "filename": "sdxl_vae.safetensors", + "name": "sdxl_vae.safetensors", + "reference": "https://huggingface.co/stabilityai/sdxl-vae", + "save_path": "default", + "type": "VAE", + "url": "https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors" + }, + { + "base": "SD1.5 VAE", + "description": "vae-ft-mse-840000-ema-pruned", + "filename": "vae-ft-mse-840000-ema-pruned.safetensors", + "name": "vae-ft-mse-840000-ema-pruned", + "reference": "https://huggingface.co/stabilityai/sd-vae-ft-mse-original", + "save_path": "default", + "type": "VAE", + "url": "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors" + }, + { + "base": "SD1.5 VAE", + "description": "orangemix vae model", + "filename": "orangemix.vae.pt", + "name": "orangemix.vae", + "reference": "https://huggingface.co/WarriorMama777/OrangeMixs", + "save_path": "default", + "type": "VAE", + "url": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt" + }, + { + "base": "SD2.1 VAE", + "description": "kl-f8-anime2 vae model", + "filename": "kl-f8-anime2.ckpt", + "name": "kl-f8-anime2", + "reference": "https://huggingface.co/hakurei/waifu-diffusion-v1-4", + "save_path": "default", + "type": "VAE", + "url": "https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt" + }, + { + "base": "SD1.5 VAE", + "description": "[2.3GB] OpenAI Consistency Decoder. 
Improved decoding for Stable Diffusion VAEs.", + "filename": "decoder.pt", + "name": "OpenAI Consistency Decoder", + "reference": "https://github.com/openai/consistencydecoder", + "save_path": "vae/openai_consistency_decoder", + "type": "VAE", + "url": "https://openaipublic.azureedge.net/diff-vae/c9cebd3132dd9c42936d803e33424145a748843c8f716c0814838bdc8a2fe7cb/decoder.pt" + }, + { + "base": "SD1.5", + "description": "Latent Consistency LoRA for SD1.5", + "filename": "pytorch_lora_weights.safetensors", + "name": "LCM LoRA SD1.5", + "reference": "https://huggingface.co/latent-consistency/lcm-lora-sdv1-5", + "save_path": "loras/lcm/SD1.5", + "type": "lora", + "url": "https://huggingface.co/latent-consistency/lcm-lora-sdv1-5/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "base": "SSD-1B", + "description": "Latent Consistency LoRA for SSD-1B", + "filename": "pytorch_lora_weights.safetensors", + "name": "LCM LoRA SSD-1B", + "reference": "https://huggingface.co/latent-consistency/lcm-lora-ssd-1b", + "save_path": "loras/lcm/SSD-1B", + "type": "lora", + "url": "https://huggingface.co/latent-consistency/lcm-lora-ssd-1b/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "base": "SDXL", + "description": "Latent Consistency LoRA for SDXL", + "filename": "pytorch_lora_weights.safetensors", + "name": "LCM LoRA SDXL", + "reference": "https://huggingface.co/latent-consistency/lcm-lora-sdxl", + "save_path": "loras/lcm/SDXL", + "type": "lora", + "url": "https://huggingface.co/latent-consistency/lcm-lora-sdxl/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "base": "segmind-vega", + "description": "The Segmind-Vega Model is a distilled version of Stable Diffusion XL (SDXL), offering a remarkable 70% reduction in size and an impressive 100% speedup while retaining high-quality text-to-image generation capabilities.", + "filename": "segmind-vega.safetensors", + "name": "Segmind-Vega", + "reference": "https://huggingface.co/segmind/Segmind-Vega", + "save_path": "checkpoints/segmind-vega", + "type": "checkpoints", + "url": "https://huggingface.co/segmind/Segmind-Vega/resolve/main/segmind-vega.safetensors" + }, + { + "base": "segmind-vega", + "description": "Segmind-VegaRT is a distilled consistency adapter for Segmind-Vega that reduces the number of inference steps to between 2 and 8.", + "filename": "pytorch_lora_weights.safetensors", + "name": "Segmind-VegaRT - Latent Consistency Model (LCM) LoRA of Segmind-Vega", + "reference": "https://huggingface.co/segmind/Segmind-VegaRT", + "save_path": "loras/segmind-vega", + "type": "lora", + "url": "https://huggingface.co/segmind/Segmind-VegaRT/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "base": "SD2.1", + "description": "LORA: Theovercomer8's Contrast Fix (SD2.1)", + "filename": "theovercomer8sContrastFix_sd21768.safetensors", + "name": "Theovercomer8's Contrast Fix (SD2.1)", + "reference": "https://civitai.com/models/8765/theovercomer8s-contrast-fix-sd15sd21-768", + "save_path": "default", + "type": "lora", + "url": "https://civitai.com/api/download/models/10350" + }, + { + "base": "SD1.5", + "description": "LORA: Theovercomer8's Contrast Fix (SD1.5)", + "filename": "theovercomer8sContrastFix_sd15.safetensors", + "name": "Theovercomer8's Contrast Fix (SD1.5)", + "reference": "https://civitai.com/models/8765/theovercomer8s-contrast-fix-sd15sd21-768", + "save_path": "default", + "type": "lora", + "url": "https://civitai.com/api/download/models/10638" + }, + { + "base": "SD1.5", + "description": "ControlNet
T2I-Adapter for depth", + "filename": "t2iadapter_depth_sd14v1.pth", + "name": "T2I-Adapter (depth)", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "save_path": "default", + "type": "T2I-Adapter", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_depth_sd14v1.pth" + }, + { + "base": "SD1.5", + "description": "ControlNet T2I-Adapter for seg", + "filename": "t2iadapter_seg_sd14v1.pth", + "name": "T2I-Adapter (seg)", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "save_path": "default", + "type": "T2I-Adapter", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_seg_sd14v1.pth" + }, + { + "base": "SD1.5", + "description": "ControlNet T2I-Adapter for sketch", + "filename": "t2iadapter_sketch_sd14v1.pth", + "name": "T2I-Adapter (sketch)", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "save_path": "default", + "type": "T2I-Adapter", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_sketch_sd14v1.pth" + }, + { + "base": "SD1.5", + "description": "ControlNet T2I-Adapter for keypose", + "filename": "t2iadapter_keypose_sd14v1.pth", + "name": "T2I-Adapter (keypose)", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "save_path": "default", + "type": "T2I-Adapter", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_keypose_sd14v1.pth" + }, + { + "base": "SD1.5", + "description": "ControlNet T2I-Adapter for openpose", + "filename": "t2iadapter_openpose_sd14v1.pth", + "name": "T2I-Adapter (openpose)", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "save_path": "default", + "type": "T2I-Adapter", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_openpose_sd14v1.pth" + }, + { + "base": "SD1.5", + "description": "ControlNet T2I-Adapter for color", + "filename": "t2iadapter_color_sd14v1.pth", + "name": "T2I-Adapter (color)", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "save_path": "default", + "type": "T2I-Adapter", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_color_sd14v1.pth" + }, + { + "base": "SD1.5", + "description": "ControlNet T2I-Adapter for canny", + "filename": "t2iadapter_canny_sd14v1.pth", + "name": "T2I-Adapter (canny)", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "save_path": "default", + "type": "T2I-Adapter", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_canny_sd14v1.pth" + }, + { + "base": "SD1.5", + "description": "ControlNet T2I-Adapter style model. 
Requires the CLIPVision model.", + "filename": "t2iadapter_style_sd14v1.pth", + "name": "T2I-Style model", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "save_path": "default", + "type": "T2I-Style", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_style_sd14v1.pth" + }, + { + "base": "SD1.5", + "description": "TemporalNet2 is a ControlNet model designed to enhance the temporal consistency of generated outputs", + "filename": "temporalnetversion2.safetensors", + "name": "CiaraRowles/TemporalNet2", + "reference": "https://huggingface.co/CiaraRowles/TemporalNet2", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/CiaraRowles/TemporalNet2/resolve/main/temporalnetversion2.safetensors" + }, + { + "base": "SDXL", + "description": "TemporalNet1XL is a re-train of the ControlNet TemporalNet1 on Stable Diffusion XL.", + "filename": "diffusion_pytorch_model.safetensors", + "name": "CiaraRowles/TemporalNet1XL (1.0)", + "reference": "https://huggingface.co/CiaraRowles/controlnet-temporalnet-sdxl-1.0", + "save_path": "controlnet/TemporalNet1XL", + "type": "controlnet", + "url": "https://huggingface.co/CiaraRowles/controlnet-temporalnet-sdxl-1.0/resolve/main/diffusion_pytorch_model.safetensors" + }, + { + "base": "ViT-G", + "description": "[3.69GB] clip_g vision model", + "filename": "clip_vision_g.safetensors", + "name": "CLIPVision model (stabilityai/clip_vision_g)", + "reference": "https://huggingface.co/stabilityai/control-lora", + "save_path": "clip_vision", + "type": "clip_vision", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/revision/clip_vision_g.safetensors" + }, + { + "base": "ViT-L", + "description": "[1.7GB] CLIPVision model (needed for the style model)", + "filename": "clip-vit-large-patch14.bin", + "name": "CLIPVision model (openai/clip-vit-large)", + "reference": "https://huggingface.co/openai/clip-vit-large-patch14", + "save_path": "clip_vision", + "type": "clip_vision", + "url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/model.safetensors" + }, + { + "base": "ViT-H", + "description": "[2.5GB] CLIPVision model (needed for IP-Adapter)", + "filename": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors", + "name": "CLIPVision model (IP-Adapter) CLIP-ViT-H-14-laion2B-s32B-b79K", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "clip_vision", + "type": "clip_vision", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors" + }, + { + "base": "ViT-G", + "description": "[3.69GB] CLIPVision model (needed for IP-Adapter)", + "filename": "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors", + "name": "CLIPVision model (IP-Adapter) CLIP-ViT-bigG-14-laion2B-39B-b160k", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "clip_vision", + "type": "clip_vision", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors" + }, + { + "base": "SDXL", + "description": "Control-LoRA: canny rank128", + "filename": "control-lora-canny-rank128.safetensors", + "name": "stabilityai/control-lora-canny-rank128.safetensors", + "reference": "https://huggingface.co/stabilityai/control-lora", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-canny-rank128.safetensors" + }, + { + "base": "SDXL", + "description": "Control-LoRA: depth rank128",
+ "filename": "control-lora-depth-rank128.safetensors", + "name": "stabilityai/control-lora-depth-rank128.safetensors", + "reference": "https://huggingface.co/stabilityai/control-lora", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-depth-rank128.safetensors" + }, + { + "base": "SDXL", + "description": "Control-LoRA: recolor rank128", + "filename": "control-lora-recolor-rank128.safetensors", + "name": "stabilityai/control-lora-recolor-rank128.safetensors", + "reference": "https://huggingface.co/stabilityai/control-lora", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-recolor-rank128.safetensors" + }, + { + "base": "SDXL", + "description": "Control-LoRA: sketch rank128 metadata", + "filename": "control-lora-sketch-rank128-metadata.safetensors", + "name": "stabilityai/control-lora-sketch-rank128-metadata.safetensors", + "reference": "https://huggingface.co/stabilityai/control-lora", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-sketch-rank128-metadata.safetensors" + }, + { + "base": "SDXL", + "description": "Control-LoRA: canny rank256", + "filename": "control-lora-canny-rank256.safetensors", + "name": "stabilityai/control-lora-canny-rank256.safetensors", + "reference": "https://huggingface.co/stabilityai/control-lora", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-canny-rank256.safetensors" + }, + { + "base": "SDXL", + "description": "Control-LoRA: depth rank256", + "filename": "control-lora-depth-rank256.safetensors", + "name": "stabilityai/control-lora-depth-rank256.safetensors", + "reference": "https://huggingface.co/stabilityai/control-lora", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-depth-rank256.safetensors" + }, + { + "base": "SDXL", + "description": "Control-LoRA: recolor rank256", + "filename": "control-lora-recolor-rank256.safetensors", + "name": "stabilityai/control-lora-recolor-rank256.safetensors", + "reference": "https://huggingface.co/stabilityai/control-lora", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-recolor-rank256.safetensors" + }, + { + "base": "SDXL", + "description": "Control-LoRA: sketch rank256", + "filename": "control-lora-sketch-rank256.safetensors", + "name": "stabilityai/control-lora-sketch-rank256.safetensors", + "reference": "https://huggingface.co/stabilityai/control-lora", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-sketch-rank256.safetensors" + }, + { + "base": "SDXL", + "description": "[46.2MB] An extremely compactly designed controlnet model (a.k.a. ControlNet-LLLite). 
Note: The model structure is highly experimental and may be subject to change in the future.", + "filename": "controllllite_v01032064e_sdxl_canny_anime.safetensors", + "name": "kohya-ss/ControlNet-LLLite: SDXL Canny Anime", + "reference": "https://huggingface.co/kohya-ss/controlnet-lllite", + "save_path": "custom_nodes/ControlNet-LLLite-ComfyUI/models", + "type": "controlnet", + "url": "https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_canny_anime.safetensors" + }, + { + "base": "SDXL", + "description": "ControlNet openpose model for SDXL", + "filename": "OpenPoseXL2.safetensors", + "name": "SDXL-controlnet: OpenPose (v2)", + "reference": "https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/resolve/main/OpenPoseXL2.safetensors" + }, + { + "base": "SDXL", + "description": "ControlNet softedge model for SDXL", + "filename": "controlnet-sd-xl-1.0-softedge-dexined.safetensors", + "name": "controlnet-SargeZT/controlnet-sd-xl-1.0-softedge-dexined", + "reference": "https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-softedge-dexined", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-softedge-dexined/resolve/main/controlnet-sd-xl-1.0-softedge-dexined.safetensors" + }, + { + "base": "SDXL", + "description": "ControlNet depth-zoe model for SDXL", + "filename": "depth-zoe-xl-v1.0-controlnet.safetensors", + "name": "controlnet-SargeZT/controlnet-sd-xl-1.0-depth-16bit-zoe", + "reference": "https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-depth-16bit-zoe", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-depth-16bit-zoe/resolve/main/depth-zoe-xl-v1.0-controlnet.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (ip2p)", + "filename": "control_v11e_sd15_ip2p_fp16.safetensors", + "name": "ControlNet-v1-1 (ip2p; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (shuffle)", + "filename": "control_v11e_sd15_shuffle_fp16.safetensors", + "name": "ControlNet-v1-1 (shuffle; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (canny)", + "filename": "control_v11p_sd15_canny_fp16.safetensors", + "name": "ControlNet-v1-1 (canny; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_canny_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (depth)", + "filename": 
"control_v11f1p_sd15_depth_fp16.safetensors", + "name": "ControlNet-v1-1 (depth; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (inpaint)", + "filename": "control_v11p_sd15_inpaint_fp16.safetensors", + "name": "ControlNet-v1-1 (inpaint; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (lineart)", + "filename": "control_v11p_sd15_lineart_fp16.safetensors", + "name": "ControlNet-v1-1 (lineart; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_lineart_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (mlsd)", + "filename": "control_v11p_sd15_mlsd_fp16.safetensors", + "name": "ControlNet-v1-1 (mlsd; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (normalbae)", + "filename": "control_v11p_sd15_normalbae_fp16.safetensors", + "name": "ControlNet-v1-1 (normalbae; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (openpose)", + "filename": "control_v11p_sd15_openpose_fp16.safetensors", + "name": "ControlNet-v1-1 (openpose; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_openpose_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (scribble)", + "filename": "control_v11p_sd15_scribble_fp16.safetensors", + "name": "ControlNet-v1-1 (scribble; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_scribble_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (seg)", + "filename": 
"control_v11p_sd15_seg_fp16.safetensors", + "name": "ControlNet-v1-1 (seg; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_seg_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (softedge)", + "filename": "control_v11p_sd15_softedge_fp16.safetensors", + "name": "ControlNet-v1-1 (softedge; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_softedge_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (anime)", + "filename": "control_v11p_sd15s2_lineart_anime_fp16.safetensors", + "name": "ControlNet-v1-1 (anime; fp16)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (tile) / v11u", + "filename": "control_v11u_sd15_tile_fp16.safetensors", + "name": "ControlNet-v1-1 (tile; fp16; v11u)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11u_sd15_tile_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (tile) / v11f1e\nYou need to this model for Tiled Resample", + "filename": "control_v11f1e_sd15_tile_fp16.safetensors", + "name": "ControlNet-v1-1 (tile; fp16; v11f1e)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "This inpaint-depth controlnet model is specialized for the hand refiner.", + "filename": "control_sd15_inpaint_depth_hand_fp16.safetensors", + "name": "ControlNet-HandRefiner-pruned (inpaint-depth-hand; fp16)", + "reference": "https://huggingface.co/hr16/ControlNet-HandRefiner-pruned", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/hr16/ControlNet-HandRefiner-pruned/resolve/main/control_sd15_inpaint_depth_hand_fp16.safetensors" + }, + { + "base": "SD1.5", + "description": "Loose ControlNet model", + "filename": "control_boxdepth_LooseControlfp16.safetensors", + "name": "control_boxdepth_LooseControlfp16 (fp16)", + "reference": "https://huggingface.co/ioclab/LooseControl_WebUICombine", + "save_path": "default", + "type": "controlnet", + "url": "https://huggingface.co/ioclab/LooseControl_WebUICombine/resolve/main/control_boxdepth_LooseControlfp16.safetensors" + }, + { + "base": "SD1.5", + "description": "GLIGEN textbox model", + "filename": "gligen_sd14_textbox_pruned_fp16.safetensors", + "name": "GLIGEN textbox 
(fp16; pruned)", + "reference": "https://huggingface.co/comfyanonymous/GLIGEN_pruned_safetensors", + "save_path": "default", + "type": "gligen", + "url": "https://huggingface.co/comfyanonymous/GLIGEN_pruned_safetensors/resolve/main/gligen_sd14_textbox_pruned_fp16.safetensors" + }, + { + "base": "SAM", + "description": "Segmenty Anything SAM model (ViT-H)", + "filename": "sam_vit_h_4b8939.pth", + "name": "ViT-H SAM model", + "reference": "https://github.com/facebookresearch/segment-anything#model-checkpoints", + "save_path": "sams", + "type": "sam", + "url": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth" + }, + { + "base": "SAM", + "description": "Segmenty Anything SAM model (ViT-L)", + "filename": "sam_vit_l_0b3195.pth", + "name": "ViT-L SAM model", + "reference": "https://github.com/facebookresearch/segment-anything#model-checkpoints", + "save_path": "sams", + "type": "sam", + "url": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth" + }, + { + "base": "SAM", + "description": "Segmenty Anything SAM model (ViT-B)", + "filename": "sam_vit_b_01ec64.pth", + "name": "ViT-B SAM model", + "reference": "https://github.com/facebookresearch/segment-anything#model-checkpoints", + "save_path": "sams", + "type": "sam", + "url": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth" + }, + { + "base": "SEECODER", + "description": "SeeCoder model", + "filename": "seecoder-v1-0.safetensors", + "name": "seecoder v1.0", + "reference": "https://huggingface.co/shi-labs/prompt-free-diffusion/tree/main/pretrained/pfd/seecoder", + "save_path": "seecoders", + "type": "seecoder", + "url": "https://huggingface.co/shi-labs/prompt-free-diffusion/resolve/main/pretrained/pfd/seecoder/seecoder-v1-0.safetensors" + }, + { + "base": "SEECODER", + "description": "SeeCoder model", + "filename": "seecoder-pa-v1-0.safetensors", + "name": "seecoder pa v1.0", + "reference": "https://huggingface.co/shi-labs/prompt-free-diffusion/tree/main/pretrained/pfd/seecoder", + "save_path": "seecoders", + "type": "seecoder", + "url": "https://huggingface.co/shi-labs/prompt-free-diffusion/resolve/main/pretrained/pfd/seecoder/seecoder-pa-v1-0.safetensors" + }, + { + "base": "SEECODER", + "description": "SeeCoder model", + "filename": "seecoder-anime-v1-0.safetensors", + "name": "seecoder anime v1.0", + "reference": "https://huggingface.co/shi-labs/prompt-free-diffusion/tree/main/pretrained/pfd/seecoder", + "save_path": "seecoders", + "type": "seecoder", + "url": "https://huggingface.co/shi-labs/prompt-free-diffusion/resolve/main/pretrained/pfd/seecoder/seecoder-anime-v1-0.safetensors" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "face_yolov8m.pt", + "name": "face_yolov8m (bbox)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/bbox", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov8m.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "face_yolov8n.pt", + "name": "face_yolov8n (bbox)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/bbox", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov8n.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the 
UltralyticsDetectorProvider of Impact Pack.", + "filename": "face_yolov8n_v2.pt", + "name": "face_yolov8n_v2 (bbox)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/bbox", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov8n_v2.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "face_yolov8s.pt", + "name": "face_yolov8s (bbox)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/bbox", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov8s.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "hand_yolov8n.pt", + "name": "hand_yolov8n (bbox)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/bbox", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/hand_yolov8n.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "hand_yolov8s.pt", + "name": "hand_yolov8s (bbox)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/bbox", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/hand_yolov8s.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "person_yolov8m-seg.pt", + "name": "person_yolov8m (segm)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/person_yolov8m-seg.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "person_yolov8n-seg.pt", + "name": "person_yolov8n (segm)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/person_yolov8n-seg.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "person_yolov8s-seg.pt", + "name": "person_yolov8s (segm)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/person_yolov8s-seg.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "deepfashion2_yolov8s-seg.pt", + "name": "deepfashion2_yolov8s (segm)", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/deepfashion2_yolov8s-seg.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "face_yolov8m-seg_60.pt", + "name": "face_yolov8m-seg_60.pt (segm)", + "reference": 
"https://github.com/hben35096/assets/releases/tag/yolo8", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/face_yolov8m-seg_60.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "face_yolov8n-seg2_60.pt", + "name": "face_yolov8n-seg2_60.pt (segm)", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/face_yolov8n-seg2_60.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "hair_yolov8n-seg_60.pt", + "name": "hair_yolov8n-seg_60.pt (segm)", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/hair_yolov8n-seg_60.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "skin_yolov8m-seg_400.pt", + "name": "skin_yolov8m-seg_400.pt (segm)", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/skin_yolov8m-seg_400.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "skin_yolov8n-seg_400.pt", + "name": "skin_yolov8n-seg_400.pt (segm)", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/skin_yolov8n-seg_400.pt" + }, + { + "base": "Ultralytics", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "filename": "skin_yolov8n-seg_800.pt", + "name": "skin_yolov8n-seg_800.pt (segm)", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "save_path": "ultralytics/segm", + "type": "Ultralytics", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/skin_yolov8n-seg_800.pt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the ArtVentureX/AnimateDiff extension node.", + "filename": "mm_sd_v14.ckpt", + "name": "animatediff/mmd_sd_v14.ckpt (comfyui-animatediff) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "AnimateDiff", + "type": "animatediff", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v14.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the ArtVentureX/AnimateDiff extension node.", + "filename": "mm_sd_v15.ckpt", + "name": "animatediff/mm_sd_v15.ckpt (comfyui-animatediff) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "AnimateDiff", + "type": "animatediff", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "mm_sd_v14.ckpt", + 
"name": "animatediff/mmd_sd_v14.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v14.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "mm_sd_v15.ckpt", + "name": "animatediff/mm_sd_v15.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "mm_sd_v15_v2.ckpt", + "name": "animatediff/mm_sd_v15_v2.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "v3_sd15_mm.ckpt", + "name": "animatediff/v3_sd15_mm.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_mm.ckpt" + }, + { + "base": "SDXL", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "mm_sdxl_v10_beta.ckpt", + "name": "animatediff/mm_sdxl_v10_beta.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sdxl_v10_beta.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "mm-Stabilized_high.pth", + "name": "AD_Stabilized_Motion/mm-Stabilized_high.pth (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/manshoety/AD_Stabilized_Motion", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/manshoety/AD_Stabilized_Motion/resolve/main/mm-Stabilized_high.pth" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "mm-Stabilized_mid.pth", + "name": "AD_Stabilized_Motion/mm-Stabilized_mid.pth (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/manshoety/AD_Stabilized_Motion", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/manshoety/AD_Stabilized_Motion/resolve/main/mm-Stabilized_mid.pth" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "temporaldiff-v1-animatediff.ckpt", + "name": 
"CiaraRowles/temporaldiff-v1-animatediff.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/CiaraRowles/TemporalDiff", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/CiaraRowles/TemporalDiff/resolve/main/temporaldiff-v1-animatediff.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "v2_lora_PanLeft.ckpt", + "name": "animatediff/v2_lora_PanLeft.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_motion_lora", + "type": "motion lora", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_PanLeft.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "v2_lora_PanRight.ckpt", + "name": "animatediff/v2_lora_PanRight.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_motion_lora", + "type": "motion lora", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_PanRight.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "v2_lora_RollingAnticlockwise.ckpt", + "name": "animatediff/v2_lora_RollingAnticlockwise.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_motion_lora", + "type": "motion lora", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_RollingAnticlockwise.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "v2_lora_RollingClockwise.ckpt", + "name": "animatediff/v2_lora_RollingClockwise.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_motion_lora", + "type": "motion lora", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_RollingClockwise.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "v2_lora_TiltDown.ckpt", + "name": "animatediff/v2_lora_TiltDown.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_motion_lora", + "type": "motion lora", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_TiltDown.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "v2_lora_TiltUp.ckpt", + "name": "animatediff/v2_lora_TiltUp.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_motion_lora", + "type": "motion lora", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_TiltUp.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the 
Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "v2_lora_ZoomIn.ckpt", + "name": "animatediff/v2_lora_ZoomIn.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_motion_lora", + "type": "motion lora", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_ZoomIn.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "v2_lora_ZoomOut.ckpt", + "name": "animatediff/v2_lora_ZoomOut.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "animatediff_motion_lora", + "type": "motion lora", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_ZoomOut.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "lt_long_mm_32_frames.ckpt", + "name": "LongAnimatediff/lt_long_mm_32_frames.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/Lightricks/LongAnimateDiff", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/Lightricks/LongAnimateDiff/resolve/main/lt_long_mm_32_frames.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "lt_long_mm_16_64_frames.ckpt", + "name": "LongAnimatediff/lt_long_mm_16_64_frames.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/Lightricks/LongAnimateDiff", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/Lightricks/LongAnimateDiff/resolve/main/lt_long_mm_16_64_frames.ckpt" + }, + { + "base": "SD1.x", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "filename": "lt_long_mm_16_64_frames_v1.1.ckpt", + "name": "LongAnimatediff/lt_long_mm_16_64_frames_v1.1.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "reference": "https://huggingface.co/Lightricks/LongAnimateDiff", + "save_path": "animatediff_models", + "type": "animatediff", + "url": "https://huggingface.co/Lightricks/LongAnimateDiff/resolve/main/lt_long_mm_16_64_frames_v1.1.ckpt" + }, + { + "base": "SD1.x", + "description": "AnimateDiff SparseCtrl RGB ControlNet model", + "filename": "v3_sd15_sparsectrl_rgb.ckpt", + "name": "animatediff/v3_sd15_sparsectrl_rgb.ckpt (ComfyUI-AnimateDiff-Evolved)", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "controlnet/SD1.5/animatediff", + "type": "controlnet", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_sparsectrl_rgb.ckpt" + }, + { + "base": "SD1.x", + "description": "AnimateDiff SparseCtrl Scribble ControlNet model", + "filename": "v3_sd15_sparsectrl_scribble.ckpt", + "name": "animatediff/v3_sd15_sparsectrl_scribble.ckpt", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "controlnet/SD1.5/animatediff", + "type": "controlnet", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_sparsectrl_scribble.ckpt" + }, + { + "base": "SD1.x", + "description": "AnimateDiff Adapter LoRA (SD1.5)", + "filename": "v3_sd15_adapter.ckpt", + "name": 
"animatediff/v3_sd15_adapter.ckpt", + "reference": "https://huggingface.co/guoyww/animatediff", + "save_path": "loras/SD1.5/animatediff", + "type": "lora", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_adapter.ckpt" + }, + { + "base": "MotionCtrl", + "description": "To use the ComfyUI-MotionCtrl extension, downloading this model is required.", + "filename": "motionctrl.pth", + "name": "TencentARC/motionctrl.pth", + "reference": "https://huggingface.co/TencentARC/MotionCtrl", + "save_path": "checkpoints/motionctrl", + "type": "checkpoints", + "url": "https://huggingface.co/TencentARC/MotionCtrl/resolve/main/motionctrl.pth" + }, + { + "base": "SD1.5", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "filename": "ip-adapter_sd15.safetensors", + "name": "ip-adapter_sd15.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15.safetensors" + }, + { + "base": "SD1.5", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "filename": "ip-adapter_sd15_light.safetensors", + "name": "ip-adapter_sd15_light.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15_light.safetensors" + }, + { + "base": "SD1.5", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "filename": "ip-adapter_sd15_vit-G.safetensors", + "name": "ip-adapter_sd15_vit-G.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15_vit-G.safetensors" + }, + { + "base": "SD1.5", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "filename": "ip-adapter-plus_sd15.safetensors", + "name": "ip-adapter-plus_sd15.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus_sd15.safetensors" + }, + { + "base": "SD1.5", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "filename": "ip-adapter-plus-face_sd15.safetensors", + "name": "ip-adapter-plus-face_sd15.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus-face_sd15.safetensors" + }, + { + "base": "SD1.5", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "filename": "ip-adapter-full-face_sd15.safetensors", + "name": "ip-adapter-full-face_sd15.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-full-face_sd15.safetensors" + }, + { + "base": "SD1.5", + "description": 
"IP-Adapter-FaceID Model (SD1.5) [ipadapter]", + "filename": "ip-adapter-faceid_sd15.bin", + "name": "ip-adapter-faceid_sd15.bin", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sd15.bin" + }, + { + "base": "SD1.5", + "description": "IP-Adapter-FaceID Plus Model (SD1.5) [ipadapter]", + "filename": "ip-adapter-faceid-plus_sd15.bin", + "name": "ip-adapter-faceid-plus_sd15.bin", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plus_sd15.bin" + }, + { + "base": "SD1.5", + "description": "IP-Adapter-FaceID Portrait Model (SD1.5) [ipadapter]", + "filename": "ip-adapter-faceid-portrait_sd15.bin", + "name": "ip-adapter-faceid-portrait_sd15.bin", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-portrait_sd15.bin" + }, + { + "base": "SD1.5", + "description": "IP-Adapter-FaceID Model (SDXL) [ipadapter]", + "filename": "ip-adapter-faceid_sdxl.bin", + "name": "ip-adapter-faceid_sdxl.bin", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sdxl.bin" + }, + { + "base": "SD1.5", + "description": "IP-Adapter-FaceID Plus Model (SDXL) [ipadapter]", + "filename": "ip-adapter-faceid-plusv2_sdxl.bin", + "name": "ip-adapter-faceid-plusv2_sdxl.bin", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sdxl.bin" + }, + { + "base": "SD1.5", + "description": "IP-Adapter-FaceID LoRA Model (SD1.5) [ipadapter]", + "filename": "ip-adapter-faceid_sd15_lora.safetensors", + "name": "ip-adapter-faceid_sd15_lora.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "loras/ipadapter", + "type": "lora", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sd15_lora.safetensors" + }, + { + "base": "SD1.5", + "description": "IP-Adapter-FaceID Plus LoRA Model (SD1.5) [ipadapter]", + "filename": "ip-adapter-faceid-plus_sd15_lora.safetensors", + "name": "ip-adapter-faceid-plus_sd15_lora.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "loras/ipadapter", + "type": "lora", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plus_sd15_lora.safetensors" + }, + { + "base": "SD1.5", + "description": "IP-Adapter-FaceID-Plus V2 Model (SD1.5) [ipadapter]", + "filename": "ip-adapter-faceid-plusv2_sd15.bin", + "name": "ip-adapter-faceid-plusv2_sd15.bin", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sd15.bin" + }, + { + "base": "SD1.5", + "description": "IP-Adapter-FaceID-Plus V2 LoRA Model (SD1.5) [ipadapter]", + "filename": "ip-adapter-faceid-plusv2_sd15_lora.safetensors", + "name": "ip-adapter-faceid-plusv2_sd15_lora.safetensors", + "reference": 
"https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "loras/ipadapter", + "type": "lora", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sd15_lora.safetensors" + }, + { + "base": "SDXL", + "description": "IP-Adapter-FaceID LoRA Model (SDXL) [ipadapter]", + "filename": "ip-adapter-faceid_sdxl_lora.safetensors", + "name": "ip-adapter-faceid_sdxl_lora.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "loras/ipadapter", + "type": "lora", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sdxl_lora.safetensors" + }, + { + "base": "SDXL", + "description": "IP-Adapter-FaceID-Plus V2 LoRA Model (SDXL) [ipadapter]", + "filename": "ip-adapter-faceid-plusv2_sdxl_lora.safetensors", + "name": "ip-adapter-faceid-plusv2_sdxl_lora.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "save_path": "loras/ipadapter", + "type": "lora", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sdxl_lora.safetensors" + }, + { + "base": "SDXL", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "filename": "ip-adapter_sdxl.safetensors", + "name": "ip-adapter_sdxl.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl.safetensors" + }, + { + "base": "SDXL", + "description": "This model requires the use of the SD1.5 encoder despite being for SDXL checkpoints [ipadapter]", + "filename": "ip-adapter_sdxl_vit-h.safetensors", + "name": "ip-adapter_sdxl_vit-h.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl_vit-h.safetensors" + }, + { + "base": "SDXL", + "description": "This model requires the use of the SD1.5 encoder despite being for SDXL checkpoints [ipadapter]", + "filename": "ip-adapter-plus_sdxl_vit-h.safetensors", + "name": "ip-adapter-plus_sdxl_vit-h.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter-plus_sdxl_vit-h.safetensors" + }, + { + "base": "SDXL", + "description": "This model requires the use of the SD1.5 encoder despite being for SDXL checkpoints [ipadapter]", + "filename": "ip-adapter-plus-face_sdxl_vit-h.safetensors", + "name": "ip-adapter-plus-face_sdxl_vit-h.safetensors", + "reference": "https://huggingface.co/h94/IP-Adapter", + "save_path": "ipadapter", + "type": "IP-Adapter", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter-plus-face_sdxl_vit-h.safetensors" + }, + { + "base": "SD1.5", + "description": "Pressing 'install' directly downloads the model from the pfg-ComfyUI/models extension node. 
(Note: Requires ComfyUI-Manager V0.24 or above)", + "filename": "pfg-novel-n10.pt", + "name": "pfg-novel-n10.pt", + "reference": "https://huggingface.co/furusu/PFG", + "save_path": "custom_nodes/pfg-ComfyUI/models", + "type": "PFG", + "url": "https://huggingface.co/furusu/PFG/resolve/main/pfg-novel-n10.pt" + }, + { + "base": "SD1.5", + "description": "Pressing 'install' directly downloads the model from the pfg-ComfyUI/models extension node. (Note: Requires ComfyUI-Manager V0.24 or above)", + "filename": "pfg-wd14-n10.pt", + "name": "pfg-wd14-n10.pt", + "reference": "https://huggingface.co/furusu/PFG", + "save_path": "custom_nodes/pfg-ComfyUI/models", + "type": "PFG", + "url": "https://huggingface.co/furusu/PFG/resolve/main/pfg-wd14-n10.pt" + }, + { + "base": "SD1.5", + "description": "Pressing 'install' directly downloads the model from the pfg-ComfyUI/models extension node. (Note: Requires ComfyUI-Manager V0.24 or above)", + "filename": "pfg-wd15beta2-n10.pt", + "name": "pfg-wd15beta2-n10.pt", + "reference": "https://huggingface.co/furusu/PFG", + "save_path": "custom_nodes/pfg-ComfyUI/models", + "type": "PFG", + "url": "https://huggingface.co/furusu/PFG/resolve/main/pfg-wd15beta2-n10.pt" + }, + { + "base": "GFPGAN", + "description": "Face Restoration Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "filename": "GFPGANv1.4.pth", + "name": "GFPGANv1.4.pth", + "reference": "https://github.com/TencentARC/GFPGAN/releases", + "save_path": "facerestore_models", + "type": "GFPGAN", + "url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth" + }, + { + "base": "CodeFormer", + "description": "Face Restoration Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "filename": "codeformer.pth", + "name": "codeformer.pth", + "reference": "https://github.com/sczhou/CodeFormer/releases", + "save_path": "facerestore_models", + "type": "CodeFormer", + "url": "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth" + }, + { + "base": "facexlib", + "description": "Face Detection Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "filename": "detection_Resnet50_Final.pth", + "name": "detection_Resnet50_Final.pth", + "reference": "https://github.com/xinntao/facexlib", + "save_path": "facerestore_models", + "type": "facexlib", + "url": "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" + }, + { + "base": "facexlib", + "description": "Face Detection Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "filename": "detection_mobilenet0.25_Final.pth", + "name": "detection_mobilenet0.25_Final.pth", + "reference": "https://github.com/xinntao/facexlib", + "save_path": "facerestore_models", + "type": "facexlib", + "url": "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_mobilenet0.25_Final.pth" + }, + { + "base": "facexlib", + "description": "Face Detection Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "filename": "yolov5l-face.pth", + "name": "yolov5l-face.pth", + "reference": "https://github.com/xinntao/facexlib", + "save_path": "facedetection", + "type": "facexlib", + "url": "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5l-face.pth" + }, + { + "base": "facexlib", + "description": "Face Detection Models. 
Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "filename": "yolov5n-face.pth", + "name": "yolov5n-face.pth", + "reference": "https://github.com/xinntao/facexlib", + "save_path": "facedetection", + "type": "facexlib", + "url": "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5n-face.pth" + }, + { + "base": "SDXL", + "description": "PhotoMaker model. This model is compatible with SDXL.", + "filename": "photomaker-v1.bin", + "name": "photomaker-v1.bin", + "reference": "https://huggingface.co/TencentARC/PhotoMaker", + "save_path": "photomaker", + "type": "photomaker", + "url": "https://huggingface.co/TencentARC/PhotoMaker/resolve/main/photomaker-v1.bin" + }, + { + "base": "inswapper", + "description": "Antelopev2 1k3d68.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "filename": "1k3d68.onnx", + "name": "1k3d68.onnx", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "save_path": "insightface/models/antelopev2", + "type": "insightface", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/1k3d68.onnx" + }, + { + "base": "inswapper", + "description": "Antelopev2 2d106det.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "filename": "2d106det.onnx", + "name": "2d106det.onnx", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "save_path": "insightface/models/antelopev2", + "type": "insightface", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/2d106det.onnx" + }, + { + "base": "inswapper", + "description": "Antelopev2 genderage.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "filename": "genderage.onnx", + "name": "genderage.onnx", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "save_path": "insightface/models/antelopev2", + "type": "insightface", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/genderage.onnx" + }, + { + "base": "inswapper", + "description": "Antelopev2 glintr100.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "filename": "glintr100.onnx", + "name": "glintr100.onnx", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "save_path": "insightface/models/antelopev2", + "type": "insightface", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/glintr100.onnx" + }, + { + "base": "inswapper", + "description": "Antelopev2 scrfd_10g_bnkps.onnx model for InstantId. 
(InstantId needs all Antelopev2 models)", + "filename": "scrfd_10g_bnkps.onnx", + "name": "scrfd_10g_bnkps.onnx", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "save_path": "insightface/models/antelopev2", + "type": "insightface", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/scrfd_10g_bnkps.onnx" + }, + { + "base": "SDXL", + "description": "InstantId main model based on IpAdapter", + "filename": "ip-adapter.bin", + "name": "ip-adapter.bin", + "reference": "https://huggingface.co/InstantX/InstantID", + "save_path": "instantid", + "type": "instantid", + "url": "https://huggingface.co/InstantX/InstantID/resolve/main/ip-adapter.bin" + }, + { + "base": "SDXL", + "description": "InstantId controlnet model", + "filename": "diffusion_pytorch_model.safetensors", + "name": "diffusion_pytorch_model.safetensors", + "reference": "https://huggingface.co/InstantX/InstantID", + "save_path": "controlnet/instantid", + "type": "controlnet", + "url": "https://huggingface.co/InstantX/InstantID/resolve/main/ControlNetModel/diffusion_pytorch_model.safetensors" + }, + { + "base": "efficient_sam", + "description": "Install efficient_sam_s_cpu.jit into ComfyUI-YoloWorld-EfficientSAM", + "filename": "efficient_sam_s_cpu.jit", + "name": "efficient_sam_s_cpu.jit [ComfyUI-YoloWorld-EfficientSAM]", + "reference": "https://huggingface.co/camenduru/YoloWorld-EfficientSAM/tree/main", + "save_path": "custom_nodes/ComfyUI-YoloWorld-EfficientSAM", + "type": "efficient_sam", + "url": "https://huggingface.co/camenduru/YoloWorld-EfficientSAM/resolve/main/efficient_sam_s_cpu.jit" + }, + { + "base": "efficient_sam", + "description": "Install efficient_sam_s_gpu.jit into ComfyUI-YoloWorld-EfficientSAM", + "filename": "efficient_sam_s_gpu.jit", + "name": "efficient_sam_s_gpu.jit [ComfyUI-YoloWorld-EfficientSAM]", + "reference": "https://huggingface.co/camenduru/YoloWorld-EfficientSAM/tree/main", + "save_path": "custom_nodes/ComfyUI-YoloWorld-EfficientSAM", + "type": "efficient_sam", + "url": "https://huggingface.co/camenduru/YoloWorld-EfficientSAM/resolve/main/efficient_sam_s_gpu.jit" + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/LICENSE.txt b/custom_nodes/ComfyUI-Manager/LICENSE.txt new file mode 100644 index 0000000000000000000000000000000000000000..f288702d2fa16d3cdf0035b15a9fcbc552cd88e7 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/LICENSE.txt @@ -0,0 +1,674 @@ + GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. 
Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. 
+ + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. 
This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. 
This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. + + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. 
+ + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. + + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. 
+ + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. + + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. 
+ + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. 
+ + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. 
If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. 
Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + <one line to give the program's name and a brief idea of what it does.> + Copyright (C) <year> <name of author> + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <https://www.gnu.org/licenses/>. + +Also add information on how to contact you by electronic and paper mail. + + If the program does terminal interaction, make it output a short +notice like this when it starts in an interactive mode: + + <program> Copyright (C) <year> <name of author> + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, your program's commands +might be different; for a GUI interface, you would use an "about box". + + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU GPL, see +<https://www.gnu.org/licenses/>. + + The GNU General Public License does not permit incorporating your program +into proprietary programs. If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications with +the library. If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +<https://www.gnu.org/licenses/why-not-lgpl.html>. diff --git a/custom_nodes/ComfyUI-Manager/README.md b/custom_nodes/ComfyUI-Manager/README.md new file mode 100644 index 0000000000000000000000000000000000000000..f1db7b039852f1db8296880b32ece79fe547590f --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/README.md @@ -0,0 +1,320 @@ +# ComfyUI Manager + +**ComfyUI-Manager** is an extension designed to enhance the usability of [ComfyUI](https://github.com/comfyanonymous/ComfyUI). It offers management functions to **install, remove, disable, and enable** various custom nodes of ComfyUI. Furthermore, this extension provides a hub feature and convenience functions to access a wide range of information within ComfyUI. 
+
+![menu](misc/menu.jpg)
+
+## NOTICE
+* 🏆 Join us for the [ComfyUI Workflow Contest](https://contest.openart.ai/), hosted by OpenArt AI (11.27.2023 - 12.15.2023). Our esteemed judge panel includes Scott E. Detweiler, Olivio Sarikas, MERJIC麦橘, among others. We're also thrilled to have the authors of ComfyUI Manager and AnimateDiff as our special guests!
+* If you wish to hide the "Share" button, click "Manager" and choose the "Share: None" option.
+* You can see the full node info on the [ComfyUI Nodes Info](https://ltdrdata.github.io/) page.
+* Versions prior to V0.22.2 can no longer detect missing nodes except through a local database. Please update ComfyUI-Manager to the latest version.
+
+## Installation
+
+### Installation[method1] (General installation method: ComfyUI-Manager only)
+
+To install ComfyUI-Manager alongside an existing installation of ComfyUI, follow these steps:
+
+1. Go to the `ComfyUI/custom_nodes` directory in a terminal (cmd)
+2. `git clone https://github.com/ltdrdata/ComfyUI-Manager.git`
+3. Restart ComfyUI
+
+
+### Installation[method2] (Installation for portable ComfyUI version: ComfyUI-Manager only)
+1. Install git
+- https://git-scm.com/download/win
+- standalone version
+- select the option: use Windows' default console window
+2. Download [scripts/install-manager-for-portable-version.bat](https://github.com/ltdrdata/ComfyUI-Manager/raw/main/scripts/install-manager-for-portable-version.bat) into the installed `"ComfyUI_windows_portable"` directory
+3. Double-click the `install-manager-for-portable-version.bat` batch file
+
+![portable-install](misc/portable-install.png)
+
+
+### Installation[method3] (Installation for linux+venv: ComfyUI + ComfyUI-Manager)
+
+To install ComfyUI together with ComfyUI-Manager on Linux using a venv environment, follow these steps:
+
+Prerequisites: `python-is-python3`, `python3-venv`
+
+1. Download [scripts/install-comfyui-venv-linux.sh](https://github.com/ltdrdata/ComfyUI-Manager/raw/main/scripts/install-comfyui-venv-linux.sh) into an empty install directory
+- ComfyUI will be installed in a subdirectory of the specified directory, and that directory will contain the generated executable scripts.
+2. `chmod +x install-comfyui-venv-linux.sh`
+3. `./install-comfyui-venv-linux.sh`
+
+### Installation Precautions
+* **DO**: The `ComfyUI-Manager` files must be located exactly at the path `ComfyUI/custom_nodes/ComfyUI-Manager`
+  * Installing from a compressed archive is not recommended.
+* **DON'T**: Decompress directly into the `ComfyUI/custom_nodes` location, so that Manager contents like `__init__.py` end up directly in that directory.
+  * You have to remove all ComfyUI-Manager files from `ComfyUI/custom_nodes`
+* **DON'T**: Decompress into a nested path such as `ComfyUI/custom_nodes/ComfyUI-Manager/ComfyUI-Manager`.
+  * You have to move `ComfyUI/custom_nodes/ComfyUI-Manager/ComfyUI-Manager` to `ComfyUI/custom_nodes/ComfyUI-Manager`
+* **DON'T**: Decompress into a renamed path such as `ComfyUI/custom_nodes/ComfyUI-Manager-main`.
+  * In such cases, `ComfyUI-Manager` may operate, but it won't be recognized by `ComfyUI-Manager` itself, and updates cannot be performed. It also poses the risk of duplicate installations.
+  * You have to rename `ComfyUI/custom_nodes/ComfyUI-Manager-main` to `ComfyUI/custom_nodes/ComfyUI-Manager`
+
+
+You can launch ComfyUI by running either `./run_gpu.sh` or `./run_cpu.sh`, depending on your system configuration.
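+
+As a quick sanity check of the layout described above, you can verify the install location from the ComfyUI root directory. This is a minimal, hypothetical snippet (not part of the official scripts):
+
+```
+# check_manager_path.py - run from the ComfyUI root directory
+import os
+
+expected = os.path.join("custom_nodes", "ComfyUI-Manager", "__init__.py")
+if os.path.isfile(expected):
+    print("ComfyUI-Manager is installed at the expected path.")
+else:
+    print("Not found - check for nested or renamed directories such as "
+          "ComfyUI-Manager-main or ComfyUI-Manager/ComfyUI-Manager.")
+```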
+
+## Colab Notebook
+This repository provides Colab notebooks that allow you to install and use ComfyUI, including ComfyUI-Manager. To use ComfyUI, [click on this link](https://colab.research.google.com/github/ltdrdata/ComfyUI-Manager/blob/main/notebooks/comfyui_colab_with_manager.ipynb).
+* Support for installing ComfyUI
+* Support for basic installation of ComfyUI-Manager
+* Support for automatically installing dependencies of custom nodes upon restarting Colab notebooks.
+
+## Changes
+* **2.4** Copy the connections of the nearest node by double-clicking.
+* **2.2.3** Support Components System
+* **0.29** Add `Update all` feature
+* **0.25** Support DB channel
+  * You can directly modify the DB channel settings in the `config.ini` file.
+  * If you want to maintain a new DB channel, please modify `channels.list` and submit a PR.
+* **0.23** Support multiple selection
+* **0.18.1** `skip update check` feature added.
+  * A feature that allows quickly opening windows in environments where update checks take a long time.
+* **0.17.1** Bug fix for the issue where enable/disable of the web extension was not working. Compatibility patch for StableSwarmUI.
+  * Requires the latest version of ComfyUI (Revision: 1240)
+* **0.17** Support preview method setting feature.
+* **0.14** Support robust update.
+* **0.13** Support additional 'pip' section for install spec.
+* **0.12** Better installation support for Windows.
+* **0.9** Support keyword search in installer menu.
+* **V0.7.1** Bug fix for the issue where updates were not being applied on Windows.
+  * **For those who have been using version 0.6, please perform a manual git pull in the custom_nodes/ComfyUI-Manager directory.**
+* **V0.7** To address the issue of a slow list refresh, the fetch-update and update-check processes were separated.
+* **V0.6** Support extension installation for missing nodes.
+* **V0.5** Removed external git program dependencies.
+
+
+## How To Use
+
+1. Click the "Manager" button on the main menu
+
+   ![mainmenu](misc/main.jpg)
+
+
+2. If you click on 'Install Custom Nodes' or 'Install Models', an installer dialog will open.
+
+   ![menu](misc/menu.jpg)
+
+   * There are three DB modes: `DB: Channel (1day cache)`, `DB: Local`, and `DB: Channel (remote)`.
+     * `Channel (1day cache)` uses cached channel information with a validity period of one day to display the list quickly.
+       * The cache is refreshed when it is missing or expired, or when information is retrieved through `Channel (remote)`.
+       * Whenever you start ComfyUI fresh, this mode is always set as the **default** mode.
+     * `Local` uses the information stored locally in ComfyUI-Manager.
+       * This information is updated only when you update ComfyUI-Manager.
+       * Custom node developers should use this mode when registering their nodes in `custom-node-list.json` and testing them.
+     * `Channel (remote)` retrieves information from the remote channel, always displaying the latest list.
+       * If retrieval fails due to a network error, it falls back to the local information.
+
+   * The `Fetch Updates` menu retrieves update data for custom nodes locally. Actual updates are applied by clicking the `Update` button in the `Install Custom Nodes` menu.
+
+3. Click the 'Install' or 'Try Install' button.
+
+   ![node-install-dialog](misc/custom-nodes.jpg)
+
+   ![model-install-dialog](misc/models.png)
+
+   * Installed: This item is already installed.
+   * Install: Clicking this button will install the item.
+   * Try Install: This is a custom node whose installation information cannot be confirmed. Click the button to try installing it.
+
+   * If a red-background `Channel` indicator appears at the top, you are not on the default channel. Since the amount of information held differs from the default channel, many custom nodes may not appear in this channel state.
+     * Channel settings have a broad impact, affecting not only the node list but also all functions like "Update all".
+   * Conflicted Nodes, marked with a yellow background, list the nodes in the respective extension that conflict with other extensions. This issue needs to be addressed by the developer; users should be aware that, due to these conflicts, some nodes may not function correctly, and the appropriate extension may need to be installed.
+
+4. If you set the `Badge:` item in the menu to `Badge: Nickname`, `Badge: Nickname (hide built-in)`, `Badge: #ID Nickname`, or `Badge: #ID Nickname (hide built-in)`, an information badge will be displayed on each node.
+   * Selecting a (hide built-in) option hides the 🦊 icon, which signifies built-in nodes.
+   * Nodes without any indication on the badge are custom nodes that Manager cannot recognize.
+   * `Badge: Nickname` displays the nickname of custom nodes, while `Badge: #ID Nickname` also includes the internal ID of the node.
+
+   ![model-install-dialog](misc/nickname.jpg)
+
+
+5. Share
+
+   ![menu](misc/main.jpg) ![share](misc/share.jpg)
+
+   * You can share a workflow by clicking the Share button at the bottom of the main menu or by selecting Share Output from the context menu of an Image node.
+   * Currently, sharing is supported via [https://comfyworkflows.com/](https://comfyworkflows.com/), [https://openart.ai](https://openart.ai/workflows/dev), and [https://youml.com](https://youml.com), as well as through the Matrix channel.
+
+   ![menu](misc/share-setting.jpg)
+
+   * Through the Share settings in the Manager menu, you can configure the behavior of the Share button in the main menu or the Share Output button in the context menu.
+     * `None`: hide the button from the main menu
+     * `All`: show a dialog where the user can select a title for sharing
+
+
+## Snapshot-Manager
+* When you press `Save snapshot` or use `Update All` in the `Manager Menu`, a snapshot of the current installation status is saved.
+  * Snapshot file directory: `ComfyUI-Manager/snapshots`
+  * You can rename snapshot files.
+* Press the "Restore" button to revert to the installation status of the respective snapshot.
+  * However, for custom nodes not managed by Git, snapshot support is incomplete.
+* A `Restore` takes effect on the next ComfyUI startup.
+
+
+![model-install-dialog](misc/snapshot.jpg)
+
+## How to register your custom node into ComfyUI-Manager
+
+* Add an entry to `custom-node-list.json` located in the root of ComfyUI-Manager and submit a Pull Request.
+* NOTE: Before submitting the PR after making changes, please check `Use local DB` and ensure that the extension list loads without any issues in the `Install custom nodes` dialog. Occasionally, missing or extra commas lead to JSON syntax errors.
+* The remaining JSON will be updated through scripts in the future, so you don't need to worry about it.
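+
+For reference, a registration entry might look like the following. This is a hypothetical example; mirror the fields of the existing entries in `custom-node-list.json` rather than treating it as a full schema:
+
+```
+{
+    "author": "your-github-id",
+    "title": "My Custom Nodes",
+    "reference": "https://github.com/your-github-id/my-custom-nodes",
+    "files": ["https://github.com/your-github-id/my-custom-nodes"],
+    "install_type": "git-clone",
+    "description": "A short description of what the node pack provides."
+}
+```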
+
+## Custom node support guide
+
+* Currently, the system operates by cloning the git repository and sequentially installing the dependencies listed in `requirements.txt` using pip, followed by invoking the `install.py` script (a sketch of this flow appears after the Component Sharing section below). In the future, we plan to discuss and determine the specifications for supporting custom nodes.
+
+* Please submit a pull request to update either the `custom-node-list.json` or `model-list.json` file.
+
+* The scanner currently provides a detection function for missing nodes; it is capable of detecting nodes declared by the following two patterns.
+  * Alternatively, you can manually provide a `node_list.json` file.
+
+```
+NODE_CLASS_MAPPINGS = {
+    "ExecutionSwitch": ExecutionSwitch,
+    "ExecutionBlocker": ExecutionBlocker,
+    ...
+}
+
+NODE_CLASS_MAPPINGS.update({
+    "UniFormer-SemSegPreprocessor": Uniformer_SemSegPreprocessor,
+    "SemSegPreprocessor": Uniformer_SemSegPreprocessor,
+})
+```
+
+* When you write a docstring in the header of the node's `.py` file as follows, it will be used for managing the database in the Manager.
+  * Currently, only the `nickname` is used, but the other fields will also be utilized in the future.
+  * The `nickname` is the name displayed on the badge of the node.
+  * If there is no `nickname`, the title is truncated to 20 characters and used instead.
+```
+"""
+@author: Dr.Lt.Data
+@title: Impact Pack
+@nickname: Impact Pack
+@description: This extension offers various detector nodes and detailer nodes that allow you to configure a workflow that automatically enhances facial details. And provide iterative upscaler.
+"""
+```
+
+
+* **Special purpose files** (optional)
+  * `node_list.json` - When the NODE_CLASS_MAPPINGS pattern of your custom nodes is not conventional, this file manually provides a list of nodes for reference. ([example](https://github.com/melMass/comfy_mtb/raw/main/node_list.json))
+  * `requirements.txt` - On installation, these pip requirements are installed automatically
+  * `install.py` - On installation, it is called automatically
+  * `uninstall.py` - On uninstallation, it is called automatically
+  * `disable.py` - When disabled, it is called automatically
+    * For custom nodes that set up `.js` files, it is recommended to provide this script to disable them.
+  * `enable.py` - When enabled, it is called automatically
+  * **All scripts are executed from the root path of the corresponding custom node.**
+
+
+## Component Sharing
+* **Copy & Paste**
+  * [Demo Page](https://ltdrdata.github.io/component-demo/)
+  * When pasting a component from the clipboard, text in the following JSON format is supported. (text/plain)
+  ```
+  {
+      "kind": "ComfyUI Components",
+      "timestamp": <timestamp>,
+      "components":
+          {
+              <component name>: <component nodedata>
+          }
+  }
+  ```
+  * `<timestamp>`: Ensure that the timestamp is always unique.
+    * "components" should have the same structure as the content of the files stored in ComfyUI-Manager/components.
+      * `<component name>`: The name uses a `::`-separated format.
+      * `<component nodedata>`: The nodedata of the group node.
+      * `<version>`: Only two formats are allowed: `major.minor.patch` or `major.minor`. (e.g. `1.0`, `2.2.1`)
+      * `<datetime>`: Saved time
+      * `<packname>`: If the packname is not empty, the category becomes packname/workflow, and it is saved in the .pack file in ComfyUI-Manager/components.
+      * `<category>`: If there is neither a category nor a packname, it is saved in the components category.
+  ```
+  "version": "1.0",
+  "datetime": 1705390656516,
+  "packname": "mypack",
+  "category": "util/pipe",
+  ```
+* **Drag & Drop**
+  * Dragging and dropping a `.pack` or `.json` file will add the corresponding components.
+  * Example pack: [Impact.pack](misc/Impact.pack)
+
+* Dragging and dropping or pasting a single component will add a node. However, when adding multiple components at once, nodes will not be added.
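+
+The install flow described in the Custom node support guide above corresponds roughly to this sketch. It is illustrative only; the actual logic lives in ComfyUI-Manager's `__init__.py` (see `gitclone_install` and `execute_install_script`), which adds Windows-specific handling and lazy install scripts:
+
+```
+# illustrative sketch of the git-clone install flow; not the actual Manager code
+import os
+import subprocess
+import sys
+
+def install_custom_node_sketch(repo_url, custom_nodes_dir):
+    repo_name = os.path.splitext(os.path.basename(repo_url))[0]
+    repo_path = os.path.join(custom_nodes_dir, repo_name)
+
+    # 1. clone the node pack into ComfyUI/custom_nodes
+    subprocess.check_call(['git', 'clone', '--recursive', repo_url, repo_path])
+
+    # 2. install pip dependencies, if the pack declares any
+    requirements = os.path.join(repo_path, 'requirements.txt')
+    if os.path.exists(requirements):
+        subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', requirements])
+
+    # 3. run the pack's own install script, if present
+    install_script = os.path.join(repo_path, 'install.py')
+    if os.path.exists(install_script):
+        subprocess.check_call([sys.executable, 'install.py'], cwd=repo_path)
+```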
+
+
+## Support of missing nodes installation
+
+![missing-menu](misc/missing-menu.png)
+
+* When you click the `Install Missing Custom Nodes` button in the menu, it displays a list of extensions that provide nodes used in the workflow but missing from the current installation.
+
+![missing-list](misc/missing-list.png)
+
+
+## Additional Feature
+* Logging to file
+  * This feature is enabled by default and can be disabled by setting `file_logging = False` in `config.ini`.
+
+* Fix node (recreate): When right-clicking on a node and selecting `Fix node (recreate)`, you can recreate the node. The widget's values are reset, while connections with matching names are kept.
+  * It is used to repair nodes from old workflows that have become incompatible with version changes in custom nodes.
+
+* Double-Click Node Title: You can set the double-click behavior of nodes in the ComfyUI-Manager menu.
+  * `Copy All Connections`, `Copy Input Connections`: Double-clicking a node copies the connections of the nearest node.
+    * This action targets the nearest node within a straight-line distance of 1000 pixels from the center of the node.
+    * In the case of `Copy All Connections`, it duplicates the existing outputs, but since duplicate connections are not allowed, the existing output connections of the original node are disconnected.
+    * This feature copies only the inputs and outputs whose names match.
+
+  * `Possible Input Connections`: Connects each input to the closest output of a matching type within the specified range.
+    * This connection links to the closest outputs among the nodes located on the left side of the target node.
+
+  * `Possible(left) + Copy(right)`: Double-clicking the left half of the title acts as `Possible Input Connections`, and double-clicking the right half acts as `Copy All Connections`.
+
+## Troubleshooting
+* If your `git.exe` is installed in a location other than the system git, install ComfyUI-Manager, run ComfyUI once, and then specify the path, including the file name, in `git_exe = ` in the generated ComfyUI-Manager/config.ini file.
+* If updating ComfyUI-Manager itself fails, go to the **ComfyUI-Manager** directory and execute the command `git update-ref refs/remotes/origin/main a361cc1 && git fetch --all && git pull`.
+  * Alternatively, download the [update-fix.py](https://github.com/ltdrdata/ComfyUI-Manager/raw/main/scripts/update-fix.py) script, place it in the ComfyUI-Manager directory, and run it using your Python command. For the portable version, use `..\..\..\python_embeded\python.exe update-fix.py`.
+* For cases where nodes like `PreviewTextNode` from `ComfyUI_Custom_Nodes_AlekPet` are only supported as front-end nodes, we currently do not provide missing-node detection for them.
+* Currently, `vid2vid` is not being updated, causing compatibility issues.
+* If you encounter the error message `Overlapped Object has pending operation at deallocation on Comfyui Manager load` under Windows:
+  * Edit the `config.ini` file and add `windows_selector_event_loop_policy = True`.
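+
+If you prefer to script the `config.ini` changes mentioned above, a minimal sketch using Python's standard `configparser` (the same module Manager itself uses to read the file) could look like this; the git path below is a placeholder:
+
+```
+# edit_manager_config.py - hypothetical helper; run after ComfyUI has generated config.ini
+import configparser
+import os
+
+config_path = os.path.join("custom_nodes", "ComfyUI-Manager", "config.ini")
+config = configparser.ConfigParser()
+config.read(config_path)
+
+if 'default' not in config:
+    config['default'] = {}
+
+config['default']['git_exe'] = r"C:\Tools\PortableGit\bin\git.exe"  # placeholder path
+config['default']['windows_selector_event_loop_policy'] = 'True'
+
+with open(config_path, 'w') as f:
+    config.write(f)
+```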
+
+
+## TODO: Unconventional form of custom node list
+
+* https://github.com/diontimmer/Sample-Diffusion-ComfyUI-Extension
+* https://github.com/senshilabs/NINJA-plugin
+* https://github.com/MockbaTheBorg/Nodes
+* https://github.com/StartHua/Comfyui_GPT_Story
+* https://github.com/NielsGercama/comfyui_customsampling
+* https://github.com/wrightdaniel2017/ComfyUI-VideoLipSync
+
+
+## Roadmap
+
+- [x] System displaying information about failed custom node imports.
+- [x] Guide for missing nodes in ComfyUI vanilla nodes.
+- [x] Collision checking system for nodes with the same ID across extensions.
+- [x] Template sharing system. (-> Component system based on Group Nodes)
+- [x] 3rd party API system.
+- [ ] Auto migration for custom nodes with changed structures.
+- [ ] Version control feature for nodes.
+- [ ] List of currently used custom nodes.
+- [ ] Support for downloading multiple models at once.
+- [ ] Model download via URL.
+- [ ] List sorting.
+- [ ] Provide node descriptions.
+
+
+# Disclaimer
+
+* This extension simply provides the convenience of installing custom nodes and does not guarantee their proper functioning.
+
+
+## Credit
+ComfyUI/[ComfyUI](https://github.com/comfyanonymous/ComfyUI) - A powerful and modular stable diffusion GUI.
+
+**And, for all ComfyUI custom node developers**
diff --git a/custom_nodes/ComfyUI-Manager/__init__.py b/custom_nodes/ComfyUI-Manager/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..65d4625ad8931d8f2e7df48790e377af7ef1a06b
--- /dev/null
+++ b/custom_nodes/ComfyUI-Manager/__init__.py
@@ -0,0 +1,2376 @@
+import configparser
+import mimetypes
+import shutil
+import traceback
+
+import folder_paths
+import os
+import sys
+import threading
+import locale
+import subprocess  # don't remove this
+from tqdm.auto import tqdm
+import concurrent
+from urllib.parse import urlparse
+import http.client
+import re
+import nodes
+import hashlib
+from datetime import datetime
+
+
+try:
+    import cm_global
+except:
+    glob_path = os.path.join(os.path.dirname(__file__), "glob")
+    sys.path.append(glob_path)
+    import cm_global
+
+    print(f"[WARN] ComfyUI-Manager: Your ComfyUI version is outdated.
Please update to the latest version.") + + +version = [2, 7, 2] +version_str = f"V{version[0]}.{version[1]}" + (f'.{version[2]}' if len(version) > 2 else '') +print(f"### Loading: ComfyUI-Manager ({version_str})") + + +comfy_ui_hash = "-" + +cache_lock = threading.Lock() + + +def handle_stream(stream, prefix): + stream.reconfigure(encoding=locale.getpreferredencoding(), errors='replace') + for msg in stream: + if prefix == '[!]' and ('it/s]' in msg or 's/it]' in msg) and ('%|' in msg or 'it [' in msg): + if msg.startswith('100%'): + print('\r' + msg, end="", file=sys.stderr), + else: + print('\r' + msg[:-1], end="", file=sys.stderr), + else: + if prefix == '[!]': + print(prefix, msg, end="", file=sys.stderr) + else: + print(prefix, msg, end="") + + +def run_script(cmd, cwd='.'): + if len(cmd) > 0 and cmd[0].startswith("#"): + print(f"[ComfyUI-Manager] Unexpected behavior: `{cmd}`") + return 0 + + process = subprocess.Popen(cmd, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1) + + stdout_thread = threading.Thread(target=handle_stream, args=(process.stdout, "")) + stderr_thread = threading.Thread(target=handle_stream, args=(process.stderr, "[!]")) + + stdout_thread.start() + stderr_thread.start() + + stdout_thread.join() + stderr_thread.join() + + return process.wait() + + +try: + import git +except: + my_path = os.path.dirname(__file__) + requirements_path = os.path.join(my_path, "requirements.txt") + + print(f"## ComfyUI-Manager: installing dependencies") + + run_script([sys.executable, '-s', '-m', 'pip', 'install', '-r', requirements_path]) + + try: + import git + except: + print(f"## [ERROR] ComfyUI-Manager: Attempting to reinstall dependencies using an alternative method.") + run_script([sys.executable, '-s', '-m', 'pip', 'install', '--user', '-r', requirements_path]) + + try: + import git + except: + print(f"## [ERROR] ComfyUI-Manager: Failed to install the GitPython package in the correct Python environment. Please install it manually in the appropriate environment. 
(You can seek help at https://app.element.io/#/room/%23comfyui_space%3Amatrix.org)") + + print(f"## ComfyUI-Manager: installing dependencies done.") + + +from git.remote import RemoteProgress + +sys.path.append('../..') + +from torchvision.datasets.utils import download_url + +comfy_ui_required_revision = 1930 +comfy_ui_required_commit_datetime = datetime(2024, 1, 24, 0, 0, 0) + +comfy_ui_revision = "Unknown" +comfy_ui_commit_datetime = datetime(1900, 1, 1, 0, 0, 0) + +comfy_path = os.path.dirname(folder_paths.__file__) +custom_nodes_path = os.path.join(comfy_path, 'custom_nodes') +js_path = os.path.join(comfy_path, "web", "extensions") + +comfyui_manager_path = os.path.dirname(__file__) +cache_dir = os.path.join(comfyui_manager_path, '.cache') +local_db_model = os.path.join(comfyui_manager_path, "model-list.json") +local_db_alter = os.path.join(comfyui_manager_path, "alter-list.json") +local_db_custom_node_list = os.path.join(comfyui_manager_path, "custom-node-list.json") +local_db_extension_node_mappings = os.path.join(comfyui_manager_path, "extension-node-map.json") +git_script_path = os.path.join(os.path.dirname(__file__), "git_helper.py") +components_path = os.path.join(comfyui_manager_path, 'components') + +startup_script_path = os.path.join(comfyui_manager_path, "startup-scripts") +config_path = os.path.join(os.path.dirname(__file__), "config.ini") +cached_config = None + +channel_list_path = os.path.join(comfyui_manager_path, 'channels.list') +channel_dict = None +channel_list = None + +from comfy.cli_args import args +import latent_preview + + +def get_channel_dict(): + global channel_dict + + if channel_dict is None: + channel_dict = {} + + if not os.path.exists(channel_list_path): + shutil.copy(channel_list_path+'.template', channel_list_path) + + with open(os.path.join(comfyui_manager_path, 'channels.list'), 'r') as file: + channels = file.read() + for x in channels.split('\n'): + channel_info = x.split("::") + if len(channel_info) == 2: + channel_dict[channel_info[0]] = channel_info[1] + + return channel_dict + + +def get_channel_list(): + global channel_list + + if channel_list is None: + channel_list = [] + for k, v in get_channel_dict().items(): + channel_list.append(f"{k}::{v}") + + return channel_list + + +def write_config(): + config = configparser.ConfigParser() + config['default'] = { + 'preview_method': get_current_preview_method(), + 'badge_mode': get_config()['badge_mode'], + 'git_exe': get_config()['git_exe'], + 'channel_url': get_config()['channel_url'], + 'share_option': get_config()['share_option'], + 'bypass_ssl': get_config()['bypass_ssl'], + "file_logging": get_config()['file_logging'], + 'default_ui': get_config()['default_ui'], + 'component_policy': get_config()['component_policy'], + 'double_click_policy': get_config()['double_click_policy'], + 'windows_selector_event_loop_policy': get_config()['windows_selector_event_loop_policy'], + } + with open(config_path, 'w') as configfile: + config.write(configfile) + + +def read_config(): + try: + config = configparser.ConfigParser() + config.read(config_path) + default_conf = config['default'] + + return { + 'preview_method': default_conf['preview_method'] if 'preview_method' in default_conf else get_current_preview_method(), + 'badge_mode': default_conf['badge_mode'] if 'badge_mode' in default_conf else 'none', + 'git_exe': default_conf['git_exe'] if 'git_exe' in default_conf else '', + 'channel_url': default_conf['channel_url'] if 'channel_url' in default_conf else 
'https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main', + 'share_option': default_conf['share_option'] if 'share_option' in default_conf else 'all', + 'bypass_ssl': default_conf['bypass_ssl'] if 'bypass_ssl' in default_conf else False, + 'file_logging': default_conf['file_logging'] if 'file_logging' in default_conf else True, + 'default_ui': default_conf['default_ui'] if 'default_ui' in default_conf else 'none', + 'component_policy': default_conf['component_policy'] if 'component_policy' in default_conf else 'workflow', + 'double_click_policy': default_conf['double_click_policy'] if 'double_click_policy' in default_conf else 'copy-all', + 'windows_selector_event_loop_policy': default_conf['windows_selector_event_loop_policy'] if 'windows_selector_event_loop_policy' in default_conf else False, + } + + except Exception: + return { + 'preview_method': get_current_preview_method(), + 'badge_mode': 'none', + 'git_exe': '', + 'channel_url': 'https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main', + 'share_option': 'all', + 'bypass_ssl': False, + 'file_logging': True, + 'default_ui': 'none', + 'component_policy': 'workflow', + 'double_click_policy': 'copy-all', + 'windows_selector_event_loop_policy': False + } + + +def get_config(): + global cached_config + + if cached_config is None: + cached_config = read_config() + + return cached_config + + +def get_current_preview_method(): + if args.preview_method == latent_preview.LatentPreviewMethod.Auto: + return "auto" + elif args.preview_method == latent_preview.LatentPreviewMethod.Latent2RGB: + return "latent2rgb" + elif args.preview_method == latent_preview.LatentPreviewMethod.TAESD: + return "taesd" + else: + return "none" + + +def set_preview_method(method): + if method == 'auto': + args.preview_method = latent_preview.LatentPreviewMethod.Auto + elif method == 'latent2rgb': + args.preview_method = latent_preview.LatentPreviewMethod.Latent2RGB + elif method == 'taesd': + args.preview_method = latent_preview.LatentPreviewMethod.TAESD + else: + args.preview_method = latent_preview.LatentPreviewMethod.NoPreviews + + get_config()['preview_method'] = args.preview_method + + +set_preview_method(get_config()['preview_method']) + + +def set_badge_mode(mode): + get_config()['badge_mode'] = mode + + +def set_default_ui_mode(mode): + get_config()['default_ui'] = mode + + +def set_component_policy(mode): + get_config()['component_policy'] = mode + + +def set_double_click_policy(mode): + get_config()['double_click_policy'] = mode + + +def try_install_script(url, repo_path, install_cmd): + if platform.system() == "Windows" and comfy_ui_commit_datetime.date() >= comfy_ui_required_commit_datetime.date(): + if not os.path.exists(startup_script_path): + os.makedirs(startup_script_path) + + script_path = os.path.join(startup_script_path, "install-scripts.txt") + with open(script_path, "a") as file: + obj = [repo_path] + install_cmd + file.write(f"{obj}\n") + + return True + else: + print(f"\n## ComfyUI-Manager: EXECUTE => {install_cmd}") + code = run_script(install_cmd, cwd=repo_path) + + if platform.system() == "Windows": + try: + if comfy_ui_commit_datetime.date() < comfy_ui_required_commit_datetime.date(): + print("\n\n###################################################################") + print(f"[WARN] ComfyUI-Manager: Your ComfyUI version ({comfy_ui_revision})[{comfy_ui_commit_datetime.date()}] is too old. 
Please update to the latest version.")
+                print(f"[WARN] The extension installation feature may not work properly in the currently installed ComfyUI version on Windows environments.")
+                print("###################################################################\n\n")
+            except:
+                pass
+
+        if code != 0:
+            if url is None:
+                url = os.path.dirname(repo_path)
+            print(f"install script failed: {url}")
+            return False
+
+
+def print_comfyui_version():
+    global comfy_ui_revision
+    global comfy_ui_commit_datetime
+    global comfy_ui_hash
+
+    is_detached = False
+    try:
+        repo = git.Repo(os.path.dirname(folder_paths.__file__))
+        comfy_ui_revision = len(list(repo.iter_commits('HEAD')))
+
+        comfy_ui_hash = repo.head.commit.hexsha
+        cm_global.variables['comfyui.revision'] = comfy_ui_revision
+
+        comfy_ui_commit_datetime = repo.head.commit.committed_datetime
+        cm_global.variables['comfyui.commit_datetime'] = comfy_ui_commit_datetime
+
+        is_detached = repo.head.is_detached
+        current_branch = repo.active_branch.name
+
+        try:
+            if comfy_ui_commit_datetime.date() < comfy_ui_required_commit_datetime.date():
+                print(f"\n\n## [WARN] ComfyUI-Manager: Your ComfyUI version ({comfy_ui_revision})[{comfy_ui_commit_datetime.date()}] is too old. Please update to the latest version. ##\n\n")
+        except:
+            pass
+
+        # process on_revision_detected -->
+        if 'cm.on_revision_detected_handler' in cm_global.variables:
+            for k, f in cm_global.variables['cm.on_revision_detected_handler']:
+                try:
+                    f(comfy_ui_revision)
+                except Exception:
+                    print(f"[ERROR] '{k}' on_revision_detected_handler")
+                    traceback.print_exc()
+
+            del cm_global.variables['cm.on_revision_detected_handler']
+        else:
+            print(f"[ComfyUI-Manager] Some features are restricted due to your ComfyUI being outdated.")
+        # <--
+
+        if current_branch == "master":
+            print(f"### ComfyUI Revision: {comfy_ui_revision} [{comfy_ui_hash[:8]}] | Released on '{comfy_ui_commit_datetime.date()}'")
+        else:
+            print(f"### ComfyUI Revision: {comfy_ui_revision} on '{current_branch}' [{comfy_ui_hash[:8]}] | Released on '{comfy_ui_commit_datetime.date()}'")
+    except:
+        if is_detached:
+            print(f"### ComfyUI Revision: {comfy_ui_revision} [{comfy_ui_hash[:8]}] *DETACHED | Released on '{comfy_ui_commit_datetime.date()}'")
+        else:
+            print("### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)")
+
+
+print_comfyui_version()
+
+
+# use subprocess to avoid file system lock by git (Windows)
+def __win_check_git_update(path, do_fetch=False, do_update=False):
+    if do_fetch:
+        command = [sys.executable, git_script_path, "--fetch", path]
+    elif do_update:
+        command = [sys.executable, git_script_path, "--pull", path]
+    else:
+        command = [sys.executable, git_script_path, "--check", path]
+
+    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    output, _ = process.communicate()
+    output = output.decode('utf-8').strip()
+
+    if 'detected dubious' in output:
+        # fix and try again
+        safedir_path = path.replace('\\', '/')
+        try:
+            print(f"[ComfyUI-Manager] Trying to fix the 'dubious repository' error on the '{safedir_path}' repo")
+            process = subprocess.Popen(['git', 'config', '--global', '--add', 'safe.directory', safedir_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+            output, _ = process.communicate()
+
+            process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+            output, _ = process.communicate()
+            output = output.decode('utf-8').strip()
+        except Exception:
+            print(f'[ComfyUI-Manager] failed to fix it')
+
+        if 'detected dubious' in output:
+            print(f'\n[ComfyUI-Manager] Failed to fix the repository setup. Please execute this command in cmd:\n'
+                  f'-----------------------------------------------------------------------------------------\n'
+                  f'git config --global --add safe.directory "{safedir_path}"\n'
+                  f'-----------------------------------------------------------------------------------------\n')
+
+    if do_update:
+        if "CUSTOM NODE PULL: Success" in output:
+            process.wait()
+            print(f"\rUpdated: {path}")
+            return True, True  # updated
+        elif "CUSTOM NODE PULL: None" in output:
+            process.wait()
+            return False, True  # there is no update
+        else:
+            print(f"\rUpdate error: {path}")
+            process.wait()
+            return False, False  # update failed
+    else:
+        if "CUSTOM NODE CHECK: True" in output:
+            process.wait()
+            return True, True
+        elif "CUSTOM NODE CHECK: False" in output:
+            process.wait()
+            return False, True
+        else:
+            print(f"\rFetch error: {path}")
+            print(f"\n{output}\n")
+            process.wait()
+            return False, True
+
+
+def __win_check_git_pull(path):
+    command = [sys.executable, git_script_path, "--pull", path]
+    process = subprocess.Popen(command)
+    process.wait()
+
+
+def switch_to_default_branch(repo):
+    show_result = repo.git.remote("show", "origin")
+    matches = re.search(r"\s*HEAD branch:\s*(.*)", show_result)
+    if matches:
+        default_branch = matches.group(1)
+        repo.git.checkout(default_branch)
+
+
+def git_repo_has_updates(path, do_fetch=False, do_update=False):
+    if do_fetch:
+        print(f"\x1b[2K\rFetching: {path}", end='')
+    elif do_update:
+        print(f"\x1b[2K\rUpdating: {path}", end='')
+
+    # Check if the path is a git repository
+    if not os.path.exists(os.path.join(path, '.git')):
+        raise ValueError('Not a git repository')
+
+    if platform.system() == "Windows":
+        updated, success = __win_check_git_update(path, do_fetch, do_update)
+        if updated and success:
+            execute_install_script(None, path, lazy_mode=True)
+        return updated, success
+    else:
+        # Fetch the latest commits from the remote repository
+        repo = git.Repo(path)
+
+        remote_name = 'origin'
+        remote = repo.remote(name=remote_name)
+
+        # Get the current commit hash
+        commit_hash = repo.head.commit.hexsha
+
+        if do_fetch or do_update:
+            remote.fetch()
+
+        if do_update:
+            if repo.head.is_detached:
+                switch_to_default_branch(repo)
+
+            current_branch = repo.active_branch
+            branch_name = current_branch.name
+            remote_commit_hash = repo.refs[f'{remote_name}/{branch_name}'].object.hexsha
+
+            if commit_hash == remote_commit_hash:
+                repo.close()
+                return False, True
+
+            try:
+                remote.pull()
+                repo.git.submodule('update', '--init', '--recursive')
+                new_commit_hash = repo.head.commit.hexsha
+
+                if commit_hash != new_commit_hash:
+                    execute_install_script(None, path)
+                    print(f"\x1b[2K\rUpdated: {path}")
+                    return True, True
+                else:
+                    return False, False
+
+            except Exception as e:
+                print(f"\nUpdating failed: {path}\n{e}", file=sys.stderr)
+                return False, False
+
+        if repo.head.is_detached:
+            repo.close()
+            return True, True
+
+        # Get commit hash of the remote branch
+        current_branch = repo.active_branch
+        branch_name = current_branch.name
+
+        remote_commit_hash = repo.refs[f'{remote_name}/{branch_name}'].object.hexsha
+
+        # Compare the commit hashes to determine if the local repository is behind the remote repository
+        if commit_hash != remote_commit_hash:
+            # Get the commit dates
+            commit_date = repo.head.commit.committed_datetime
+            remote_commit_date = repo.refs[f'{remote_name}/{branch_name}'].object.committed_datetime
+
+            # Compare the commit dates to determine if the local
repository is behind the remote repository + if commit_date < remote_commit_date: + repo.close() + return True, True + + repo.close() + + return False, True + + +def git_pull(path): + # Check if the path is a git repository + if not os.path.exists(os.path.join(path, '.git')): + raise ValueError('Not a git repository') + + # Pull the latest changes from the remote repository + if platform.system() == "Windows": + return __win_check_git_pull(path) + else: + repo = git.Repo(path) + + print(f"path={path} / repo.is_dirty: {repo.is_dirty()}") + + if repo.is_dirty(): + repo.git.stash() + + if repo.head.is_detached: + switch_to_default_branch(repo) + + origin = repo.remote(name='origin') + origin.pull() + repo.git.submodule('update', '--init', '--recursive') + + repo.close() + + return True + + +async def get_data(uri, silent=False): + if not silent: + print(f"FETCH DATA from: {uri}") + + if uri.startswith("http"): + async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session: + async with session.get(uri) as resp: + json_text = await resp.text() + else: + with cache_lock: + with open(uri, "r", encoding="utf-8") as f: + json_text = f.read() + + json_obj = json.loads(json_text) + return json_obj + + +def setup_js(): + import nodes + js_dest_path = os.path.join(js_path, "comfyui-manager") + + if hasattr(nodes, "EXTENSION_WEB_DIRS"): + if os.path.exists(js_dest_path): + shutil.rmtree(js_dest_path) + else: + print(f"[WARN] ComfyUI-Manager: Your ComfyUI version is outdated. Please update to the latest version.") + # setup js + if not os.path.exists(js_dest_path): + os.makedirs(js_dest_path) + js_src_path = os.path.join(comfyui_manager_path, "js", "comfyui-manager.js") + + print(f"### ComfyUI-Manager: Copy .js from '{js_src_path}' to '{js_dest_path}'") + shutil.copy(js_src_path, js_dest_path) + + +setup_js() + + +def setup_environment(): + git_exe = get_config()['git_exe'] + + if git_exe != '': + git.Git().update_environment(GIT_PYTHON_GIT_EXECUTABLE=git_exe) + + +setup_environment() + + +# Expand Server api + +import server +from aiohttp import web +import aiohttp +import json +import zipfile +import urllib.request + + +def simple_hash(input_string): + hash_value = 0 + for char in input_string: + hash_value = (hash_value * 31 + ord(char)) % (2**32) + + return hash_value + + +def is_file_created_within_one_day(file_path): + if not os.path.exists(file_path): + return False + + file_creation_time = os.path.getctime(file_path) + current_time = datetime.now().timestamp() + time_difference = current_time - file_creation_time + + return time_difference <= 86400 + + +async def get_data_by_mode(mode, filename): + try: + if mode == "local": + uri = os.path.join(comfyui_manager_path, filename) + json_obj = await get_data(uri) + else: + uri = get_config()['channel_url'] + '/' + filename + cache_uri = str(simple_hash(uri))+'_'+filename + cache_uri = os.path.join(cache_dir, cache_uri) + + if mode == "cache": + if is_file_created_within_one_day(cache_uri): + json_obj = await get_data(cache_uri) + else: + json_obj = await get_data(uri) + with cache_lock: + with open(cache_uri, "w", encoding='utf-8') as file: + json.dump(json_obj, file, indent=4, sort_keys=True) + else: + uri = get_config()['channel_url'] + '/' + filename + json_obj = await get_data(uri) + with cache_lock: + with open(cache_uri, "w", encoding='utf-8') as file: + json.dump(json_obj, file, indent=4, sort_keys=True) + except Exception as e: + print(f"[ComfyUI-Manager] Due to a network error, switching to 
local mode.\n=> {filename}\n=> {e}") + uri = os.path.join(comfyui_manager_path, filename) + json_obj = await get_data(uri) + + return json_obj + + +def get_model_dir(data): + if data['save_path'] != 'default': + if '..' in data['save_path'] or data['save_path'].startswith('/'): + print(f"[WARN] '{data['save_path']}' is not allowed path. So it will be saved into 'models/etc'.") + base_model = "etc" + else: + if data['save_path'].startswith("custom_nodes"): + base_model = os.path.join(comfy_path, data['save_path']) + else: + base_model = os.path.join(folder_paths.models_dir, data['save_path']) + else: + model_type = data['type'] + if model_type == "checkpoints": + base_model = folder_paths.folder_names_and_paths["checkpoints"][0][0] + elif model_type == "unclip": + base_model = folder_paths.folder_names_and_paths["checkpoints"][0][0] + elif model_type == "VAE": + base_model = folder_paths.folder_names_and_paths["vae"][0][0] + elif model_type == "lora": + base_model = folder_paths.folder_names_and_paths["loras"][0][0] + elif model_type == "T2I-Adapter": + base_model = folder_paths.folder_names_and_paths["controlnet"][0][0] + elif model_type == "T2I-Style": + base_model = folder_paths.folder_names_and_paths["controlnet"][0][0] + elif model_type == "controlnet": + base_model = folder_paths.folder_names_and_paths["controlnet"][0][0] + elif model_type == "clip_vision": + base_model = folder_paths.folder_names_and_paths["clip_vision"][0][0] + elif model_type == "gligen": + base_model = folder_paths.folder_names_and_paths["gligen"][0][0] + elif model_type == "upscale": + base_model = folder_paths.folder_names_and_paths["upscale_models"][0][0] + elif model_type == "embeddings": + base_model = folder_paths.folder_names_and_paths["embeddings"][0][0] + else: + base_model = "etc" + + return base_model + + +def get_model_path(data): + base_model = get_model_dir(data) + return os.path.join(base_model, data['filename']) + + +def check_a_custom_node_installed(item, do_fetch=False, do_update_check=True, do_update=False): + item['installed'] = 'None' + + if item['install_type'] == 'git-clone' and len(item['files']) == 1: + url = item['files'][0] + + if url.endswith("/"): + url = url[:-1] + + dir_name = os.path.splitext(os.path.basename(url))[0].replace(".git", "") + dir_path = os.path.join(custom_nodes_path, dir_name) + if os.path.exists(dir_path): + try: + item['installed'] = 'True' # default + + if cm_global.try_call(api="cm.is_import_failed_extension", name=dir_name): + item['installed'] = 'Fail' + + if do_update_check: + update_state, success = git_repo_has_updates(dir_path, do_fetch, do_update) + if (do_update_check or do_update) and update_state: + item['installed'] = 'Update' + elif do_update and not success: + item['installed'] = 'Fail' + except: + if cm_global.try_call(api="cm.is_import_failed_extension", name=dir_name): + item['installed'] = 'Fail' + else: + item['installed'] = 'True' + + elif os.path.exists(dir_path + ".disabled"): + item['installed'] = 'Disabled' + + else: + item['installed'] = 'False' + + elif item['install_type'] == 'copy' and len(item['files']) == 1: + dir_name = os.path.basename(item['files'][0]) + + if item['files'][0].endswith('.py'): + base_path = custom_nodes_path + elif 'js_path' in item: + base_path = os.path.join(js_path, item['js_path']) + else: + base_path = js_path + + file_path = os.path.join(base_path, dir_name) + if os.path.exists(file_path): + if cm_global.try_call(api="cm.is_import_failed_extension", name=dir_name): + item['installed'] = 'Fail' + else: + 
item['installed'] = 'True'
+        elif os.path.exists(file_path + ".disabled"):
+            item['installed'] = 'Disabled'
+        else:
+            item['installed'] = 'False'
+
+
+def check_custom_nodes_installed(json_obj, do_fetch=False, do_update_check=True, do_update=False):
+    if do_fetch:
+        print("Start fetching...", end="")
+    elif do_update:
+        print("Start updating...", end="")
+    elif do_update_check:
+        print("Start update check...", end="")
+
+    def process_custom_node(item):
+        check_a_custom_node_installed(item, do_fetch, do_update_check, do_update)
+
+    with concurrent.futures.ThreadPoolExecutor(4) as executor:
+        for item in json_obj['custom_nodes']:
+            executor.submit(process_custom_node, item)
+
+    if do_fetch:
+        print(f"\x1b[2K\rFetching done.")
+    elif do_update:
+        update_exists = any(item['installed'] == 'Update' for item in json_obj['custom_nodes'])
+        if update_exists:
+            print(f"\x1b[2K\rUpdate done.")
+        else:
+            print(f"\x1b[2K\rAll extensions are already up-to-date.")
+    elif do_update_check:
+        print(f"\x1b[2K\rUpdate check done.")
+
+
+@server.PromptServer.instance.routes.get("/customnode/getmappings")
+async def fetch_customnode_mappings(request):
+    json_obj = await get_data_by_mode(request.rel_url.query["mode"], 'extension-node-map.json')
+
+    all_nodes = set()
+    patterns = []
+    for k, x in json_obj.items():
+        all_nodes.update(set(x[0]))
+
+        if 'nodename_pattern' in x[1]:
+            patterns.append((x[1]['nodename_pattern'], x[0]))
+
+    missing_nodes = set(nodes.NODE_CLASS_MAPPINGS.keys()) - all_nodes
+
+    for x in missing_nodes:
+        for pat, item in patterns:
+            if re.match(pat, x):
+                item.append(x)
+
+    return web.json_response(json_obj, content_type='application/json')
+
+
+@server.PromptServer.instance.routes.get("/customnode/fetch_updates")
+async def fetch_updates(request):
+    try:
+        json_obj = await get_data_by_mode(request.rel_url.query["mode"], 'custom-node-list.json')
+
+        check_custom_nodes_installed(json_obj, True)
+
+        update_exists = any('custom_nodes' in json_obj and 'installed' in node and node['installed'] == 'Update' for node in json_obj['custom_nodes'])
+
+        if update_exists:
+            return web.Response(status=201)
+
+        return web.Response(status=200)
+    except:
+        return web.Response(status=400)
+
+
+@server.PromptServer.instance.routes.get("/customnode/update_all")
+async def update_all(request):
+    try:
+        save_snapshot_with_postfix('autosave')
+
+        json_obj = await get_data_by_mode(request.rel_url.query["mode"], 'custom-node-list.json')
+
+        check_custom_nodes_installed(json_obj, do_update=True)
+
+        updated = [item['title'] for item in json_obj['custom_nodes'] if item['installed'] == 'Update']
+        failed = [item['title'] for item in json_obj['custom_nodes'] if item['installed'] == 'Fail']
+
+        res = {'updated': updated, 'failed': failed}
+
+        if len(updated) == 0 and len(failed) == 0:
+            status = 200
+        else:
+            status = 201
+
+        return web.json_response(res, status=status, content_type='application/json')
+    except:
+        return web.Response(status=400)
+
+
+def convert_markdown_to_html(input_text):
+    pattern_a = re.compile(r'\[a/([^]]+)\]\(([^)]+)\)')
+    pattern_w = re.compile(r'\[w/([^]]+)\]')
+    pattern_i = re.compile(r'\[i/([^]]+)\]')
+    pattern_bold = re.compile(r'\*\*([^*]+)\*\*')
+    pattern_white = re.compile(r'%%([^*]+)%%')
+
+    def replace_a(match):
+        return f"<a href='{match.group(2)}' target='blank'>{match.group(1)}</a>"
+
+    def replace_w(match):
+        return f"<p class='cm-warn-note'>{match.group(1)}</p>"
+
+    def replace_i(match):
+        return f"<p class='cm-info-note'>{match.group(1)}</p>"
+
+    def replace_bold(match):
+        return f"<b>{match.group(1)}</b>"
+
+    def replace_white(match):
+        return f"<font color='white'>{match.group(1)}</font>"
+
+    # escape raw brackets and angle brackets before injecting the generated tags
+    input_text = input_text.replace('\\[', '&#91;').replace('\\]', '&#93;').replace('<', '&lt;').replace('>', '&gt;')
+
+    result_text = re.sub(pattern_a, replace_a, input_text)
+    result_text = re.sub(pattern_w, replace_w, result_text)
+    result_text = re.sub(pattern_i, replace_i, result_text)
+    result_text = re.sub(pattern_bold, replace_bold, result_text)
+    result_text = re.sub(pattern_white, replace_white, result_text)
+
+    return result_text.replace("\n", "<BR>")
+
+
+def populate_markdown(x):
+    if 'description' in x:
+        x['description'] = convert_markdown_to_html(x['description'])
+
+    if 'name' in x:
+        x['name'] = x['name'].replace('<', '&lt;').replace('>', '&gt;')
+
+    if 'title' in x:
+        x['title'] = x['title'].replace('<', '&lt;').replace('>', '&gt;')
+
+
+@server.PromptServer.instance.routes.get("/customnode/getlist")
+async def fetch_customnode_list(request):
+    if "skip_update" in request.rel_url.query and request.rel_url.query["skip_update"] == "true":
+        skip_update = True
+    else:
+        skip_update = False
+
+    if request.rel_url.query["mode"] == "local":
+        channel = 'local'
+    else:
+        channel = get_config()['channel_url']
+
+    json_obj = await get_data_by_mode(request.rel_url.query["mode"], 'custom-node-list.json')
+
+    def is_ignored_notice(code):
+        global version
+
+        if code is not None and code.startswith('#NOTICE_'):
+            try:
+                notice_version = [int(x) for x in code[8:].split('.')]
+                return notice_version[0] < version[0] or (notice_version[0] == version[0] and notice_version[1] <= version[1])
+            except Exception:
+                return False
+        else:
+            return False
+
+    json_obj['custom_nodes'] = [record for record in json_obj['custom_nodes'] if not is_ignored_notice(record.get('author'))]
+
+    check_custom_nodes_installed(json_obj, False, not skip_update)
+
+    for x in json_obj['custom_nodes']:
+        populate_markdown(x)
+
+    if channel != 'local':
+        found = 'custom'
+
+        for name, url in get_channel_dict().items():
+            if url == channel:
+                found = name
+                break
+
+        channel = found
+
+    json_obj['channel'] = channel
+
+    return web.json_response(json_obj, content_type='application/json')
+
+
+@server.PromptServer.instance.routes.get("/alternatives/getlist")
+async def fetch_alternatives_list(request):
+    if "skip_update" in request.rel_url.query and request.rel_url.query["skip_update"] == "true":
+        skip_update = True
+    else:
+        skip_update = False
+
+    alter_json = await get_data_by_mode(request.rel_url.query["mode"], 'alter-list.json')
+    custom_node_json = await get_data_by_mode(request.rel_url.query["mode"], 'custom-node-list.json')
+
+    fileurl_to_custom_node = {}
+
+    for item in custom_node_json['custom_nodes']:
+        for fileurl in item['files']:
+            fileurl_to_custom_node[fileurl] = item
+
+    for item in alter_json['items']:
+        fileurl = item['id']
+        if fileurl in fileurl_to_custom_node:
+            custom_node = fileurl_to_custom_node[fileurl]
+            check_a_custom_node_installed(custom_node, not skip_update)
+
+            populate_markdown(item)
+            populate_markdown(custom_node)
+            item['custom_node'] = custom_node
+
+    return web.json_response(alter_json, content_type='application/json')
+
+
+def check_model_installed(json_obj):
+    def process_model(item):
+        model_path = get_model_path(item)
+        item['installed'] = 'None'
+
+        if model_path is not None:
+            if os.path.exists(model_path):
+                item['installed'] = 'True'
+            else:
+                item['installed'] = 'False'
+
+    with concurrent.futures.ThreadPoolExecutor(8) as executor:
+        for item in json_obj['models']:
+            executor.submit(process_model, item)
+
+
+@server.PromptServer.instance.routes.get("/externalmodel/getlist")
+async def fetch_externalmodel_list(request):
+    json_obj = await get_data_by_mode(request.rel_url.query["mode"], 'model-list.json')
+
+    check_model_installed(json_obj)
+
+    for x in json_obj['models']:
+        populate_markdown(x)
+
+    return web.json_response(json_obj, content_type='application/json')
+
+
+@server.PromptServer.instance.routes.get("/snapshot/getlist")
+async def get_snapshot_list(request):
+    snapshots_directory = os.path.join(os.path.dirname(__file__), 'snapshots')
+    items = [f[:-5] for f in os.listdir(snapshots_directory) if f.endswith('.json')]
+    items.sort(reverse=True)
+    return web.json_response({'items': items}, content_type='application/json')
+
+
+@server.PromptServer.instance.routes.get("/snapshot/remove")
+async def remove_snapshot(request):
+    try:
+        target = request.rel_url.query["target"]
+
+        path = os.path.join(os.path.dirname(__file__), 'snapshots', f"{target}.json")
+        if os.path.exists(path):
+            os.remove(path)
+
+        return web.Response(status=200)
+    except:
+        return web.Response(status=400)
+
+
+@server.PromptServer.instance.routes.get("/snapshot/restore")
+async def restore_snapshot(request):
+    try:
+        target = request.rel_url.query["target"]
+
+        path = os.path.join(os.path.dirname(__file__), 'snapshots', f"{target}.json")
+        if os.path.exists(path):
+            if not os.path.exists(startup_script_path):
+                os.makedirs(startup_script_path)
+
+            target_path = os.path.join(startup_script_path, "restore-snapshot.json")
+            shutil.copy(path, target_path)
+
+            print(f"Snapshot restore scheduled: `{target}`")
+            return web.Response(status=200)
+
+        print(f"Snapshot file not found: `{path}`")
+        return web.Response(status=400)
+    except:
+        return web.Response(status=400)
+
+
+def get_current_snapshot():
+    # Get ComfyUI hash
+    repo_path = os.path.dirname(folder_paths.__file__)
+
+    if not os.path.exists(os.path.join(repo_path, '.git')):
+        print(f"Cannot take a snapshot: The installed ComfyUI does not have a Git repository.")
+        return web.Response(status=400)
+
+    repo = git.Repo(repo_path)
+    comfyui_commit_hash = repo.head.commit.hexsha
+
+    git_custom_nodes = {}
+    file_custom_nodes = []
+
+    # Get custom nodes hash
+    for path in os.listdir(custom_nodes_path):
+        fullpath = os.path.join(custom_nodes_path, path)
+
+        if os.path.isdir(fullpath):
+            is_disabled = path.endswith(".disabled")
+
+            try:
+                git_dir = os.path.join(fullpath, '.git')
+
+                if not os.path.exists(git_dir):
+                    continue
+
+                repo = git.Repo(fullpath)
+                commit_hash = repo.head.commit.hexsha
+                url = repo.remotes.origin.url
+                git_custom_nodes[url] = {
+                    'hash': commit_hash,
+                    'disabled': is_disabled
+                }
+
+            except:
+                print(f"Failed to extract snapshots for the custom node '{path}'.")
+
+        elif path.endswith('.py'):
+            is_disabled = path.endswith(".py.disabled")
+            filename = os.path.basename(path)
+            item = {
+                'filename': filename,
+                'disabled': is_disabled
+            }
+
+            file_custom_nodes.append(item)
+
+    return {
+        'comfyui': comfyui_commit_hash,
+        'git_custom_nodes': git_custom_nodes,
+        'file_custom_nodes': file_custom_nodes,
+    }
+
+
+def save_snapshot_with_postfix(postfix):
+    now = datetime.now()
+
+    date_time_format = now.strftime("%Y-%m-%d_%H-%M-%S")
+    file_name = f"{date_time_format}_{postfix}"
+
+    path = os.path.join(os.path.dirname(__file__), 'snapshots', f"{file_name}.json")
+    with open(path, "w") as json_file:
+        json.dump(get_current_snapshot(), json_file, indent=4)
+
+
+@server.PromptServer.instance.routes.get("/snapshot/get_current")
+async def get_current_snapshot_api(request):
+    try:
+        return web.json_response(get_current_snapshot(), content_type='application/json')
+    except:
+        return web.Response(status=400)
+
+
+@server.PromptServer.instance.routes.get("/snapshot/save")
+async def save_snapshot(request):
+    try:
+        save_snapshot_with_postfix('snapshot')
+        return web.Response(status=200)
+    except:
+        return web.Response(status=400)
+
+
+def unzip_install(files):
+    temp_filename = 'manager-temp.zip'
+    for url in files:
+        if url.endswith("/"):
+            url = url[:-1]
+        try:
+            headers = {
+                'User-Agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'} + + req = urllib.request.Request(url, headers=headers) + response = urllib.request.urlopen(req) + data = response.read() + + with open(temp_filename, 'wb') as f: + f.write(data) + + with zipfile.ZipFile(temp_filename, 'r') as zip_ref: + zip_ref.extractall(custom_nodes_path) + + os.remove(temp_filename) + except Exception as e: + print(f"Install(unzip) error: {url} / {e}", file=sys.stderr) + return False + + print("Installation was successful.") + return True + + +def download_url_with_agent(url, save_path): + try: + headers = { + 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'} + + req = urllib.request.Request(url, headers=headers) + response = urllib.request.urlopen(req) + data = response.read() + + if not os.path.exists(os.path.dirname(save_path)): + os.makedirs(os.path.dirname(save_path)) + + with open(save_path, 'wb') as f: + f.write(data) + + except Exception as e: + print(f"Download error: {url} / {e}", file=sys.stderr) + return False + + print("Installation was successful.") + return True + + +def copy_install(files, js_path_name=None): + for url in files: + if url.endswith("/"): + url = url[:-1] + try: + if url.endswith(".py"): + download_url(url, custom_nodes_path) + else: + path = os.path.join(js_path, js_path_name) if js_path_name is not None else js_path + if not os.path.exists(path): + os.makedirs(path) + download_url(url, path) + + except Exception as e: + print(f"Install(copy) error: {url} / {e}", file=sys.stderr) + return False + + print("Installation was successful.") + return True + + +def copy_uninstall(files, js_path_name='.'): + for url in files: + if url.endswith("/"): + url = url[:-1] + dir_name = os.path.basename(url) + base_path = custom_nodes_path if url.endswith('.py') else os.path.join(js_path, js_path_name) + file_path = os.path.join(base_path, dir_name) + + try: + if os.path.exists(file_path): + os.remove(file_path) + elif os.path.exists(file_path + ".disabled"): + os.remove(file_path + ".disabled") + except Exception as e: + print(f"Uninstall(copy) error: {url} / {e}", file=sys.stderr) + return False + + print("Uninstallation was successful.") + return True + + +def copy_set_active(files, is_disable, js_path_name='.'): + if is_disable: + action_name = "Disable" + else: + action_name = "Enable" + + for url in files: + if url.endswith("/"): + url = url[:-1] + dir_name = os.path.basename(url) + base_path = custom_nodes_path if url.endswith('.py') else os.path.join(js_path, js_path_name) + file_path = os.path.join(base_path, dir_name) + + try: + if is_disable: + current_name = file_path + new_name = file_path + ".disabled" + else: + current_name = file_path + ".disabled" + new_name = file_path + + os.rename(current_name, new_name) + + except Exception as e: + print(f"{action_name}(copy) error: {url} / {e}", file=sys.stderr) + + return False + + print(f"{action_name} was successful.") + return True + + +def execute_install_script(url, repo_path, lazy_mode=False): + install_script_path = os.path.join(repo_path, "install.py") + requirements_path = os.path.join(repo_path, "requirements.txt") + + if lazy_mode: + install_cmd = ["#LAZY-INSTALL-SCRIPT", sys.executable] + try_install_script(url, repo_path, install_cmd) + else: + if os.path.exists(requirements_path): + print("Install: pip packages") + with open(requirements_path, "r") as requirements_file: + for line in 
requirements_file: + package_name = line.strip() + + if package_name: + install_cmd = [sys.executable, "-m", "pip", "install", package_name] + try_install_script(url, repo_path, install_cmd) + + if os.path.exists(install_script_path): + print("Install: install script") + install_cmd = [sys.executable, "install.py"] + try_install_script(url, repo_path, install_cmd) + + return True + + +class GitProgress(RemoteProgress): + def __init__(self): + super().__init__() + self.pbar = tqdm() + + def update(self, op_code, cur_count, max_count=None, message=''): + self.pbar.total = max_count + self.pbar.n = cur_count + self.pbar.pos = 0 + self.pbar.refresh() + + +def is_valid_url(url): + try: + result = urlparse(url) + return all([result.scheme, result.netloc]) + except ValueError: + return False + + +def gitclone_install(files): + print(f"install: {files}") + for url in files: + if not is_valid_url(url): + print(f"Invalid git url: '{url}'") + return False + + if url.endswith("/"): + url = url[:-1] + try: + print(f"Download: git clone '{url}'") + repo_name = os.path.splitext(os.path.basename(url))[0] + repo_path = os.path.join(custom_nodes_path, repo_name) + + # Clone the repository from the remote URL + if platform.system() == 'Windows': + res = run_script([sys.executable, git_script_path, "--clone", custom_nodes_path, url]) + if res != 0: + return False + else: + repo = git.Repo.clone_from(url, repo_path, recursive=True, progress=GitProgress()) + repo.git.clear_cache() + repo.close() + + if not execute_install_script(url, repo_path): + return False + + except Exception as e: + print(f"Install(git-clone) error: {url} / {e}", file=sys.stderr) + return False + + print("Installation was successful.") + return True + + +def gitclone_fix(files): + print(f"Try fixing: {files}") + for url in files: + if not is_valid_url(url): + print(f"Invalid git url: '{url}'") + return False + + if url.endswith("/"): + url = url[:-1] + try: + repo_name = os.path.splitext(os.path.basename(url))[0] + repo_path = os.path.join(custom_nodes_path, repo_name) + + if not execute_install_script(url, repo_path): + return False + + except Exception as e: + print(f"Fix(git-clone) error: {url} / {e}", file=sys.stderr) + return False + + print(f"Attempt to fix '{files}' is done.") + return True + + +def pip_install(packages): + install_cmd = ['#FORCE', sys.executable, "-m", "pip", "install", '-U'] + packages + try_install_script('pip install via manager', '.', install_cmd) + + +import platform +import subprocess +import time + + +def rmtree(path): + retry_count = 3 + + while True: + try: + retry_count -= 1 + + if platform.system() == "Windows": + run_script(['attrib', '-R', path + '\\*', '/S']) + shutil.rmtree(path) + + return True + + except Exception as ex: + print(f"ex: {ex}") + time.sleep(3) + + if retry_count < 0: + raise ex + + print(f"Uninstall retry({retry_count})") + + +def gitclone_uninstall(files): + import shutil + import os + + print(f"uninstall: {files}") + for url in files: + if url.endswith("/"): + url = url[:-1] + try: + dir_name = os.path.splitext(os.path.basename(url))[0].replace(".git", "") + dir_path = os.path.join(custom_nodes_path, dir_name) + + # safety check + if dir_path == '/' or dir_path[1:] == ":/" or dir_path == '': + print(f"Uninstall(git-clone) error: invalid path '{dir_path}' for '{url}'") + return False + + install_script_path = os.path.join(dir_path, "uninstall.py") + disable_script_path = os.path.join(dir_path, "disable.py") + if 
os.path.exists(install_script_path): + uninstall_cmd = [sys.executable, "uninstall.py"] + code = run_script(uninstall_cmd, cwd=dir_path) + + if code != 0: + print(f"An error occurred during the execution of the uninstall.py script. Only the directory '{dir_path}' will be deleted.") + elif os.path.exists(disable_script_path): + disable_script = [sys.executable, "disable.py"] + code = run_script(disable_script, cwd=dir_path) + if code != 0: + print(f"An error occurred during the execution of the disable.py script. Only the directory '{dir_path}' will be deleted.") + + if os.path.exists(dir_path): + rmtree(dir_path) + elif os.path.exists(dir_path + ".disabled"): + rmtree(dir_path + ".disabled") + except Exception as e: + print(f"Uninstall(git-clone) error: {url} / {e}", file=sys.stderr) + return False + + print("Uninstallation was successful.") + return True + + +def gitclone_set_active(files, is_disable): + import os + + if is_disable: + action_name = "Disable" + else: + action_name = "Enable" + + print(f"{action_name}: {files}") + for url in files: + if url.endswith("/"): + url = url[:-1] + try: + dir_name = os.path.splitext(os.path.basename(url))[0].replace(".git", "") + dir_path = os.path.join(custom_nodes_path, dir_name) + + # safety check + if dir_path == '/' or dir_path[1:] == ":/" or dir_path == '': + print(f"{action_name}(git-clone) error: invalid path '{dir_path}' for '{url}'") + return False + + if is_disable: + current_path = dir_path + new_path = dir_path + ".disabled" + else: + current_path = dir_path + ".disabled" + new_path = dir_path + + os.rename(current_path, new_path) + + if is_disable: + if os.path.exists(os.path.join(new_path, "disable.py")): + disable_script = [sys.executable, "disable.py"] + try_install_script(url, new_path, disable_script) + else: + if os.path.exists(os.path.join(new_path, "enable.py")): + enable_script = [sys.executable, "enable.py"] + try_install_script(url, new_path, enable_script) + + except Exception as e: + print(f"{action_name}(git-clone) error: {url} / {e}", file=sys.stderr) + return False + + print(f"{action_name} was successful.") + return True + + +def gitclone_update(files): + import os + + print(f"Update: {files}") + for url in files: + if url.endswith("/"): + url = url[:-1] + try: + repo_name = os.path.splitext(os.path.basename(url))[0].replace(".git", "") + repo_path = os.path.join(custom_nodes_path, repo_name) + git_pull(repo_path) + + if not execute_install_script(url, repo_path, lazy_mode=True): + return False + + except Exception as e: + print(f"Update(git-clone) error: {url} / {e}", file=sys.stderr) + return False + + print("Update was successful.") + return True + + +@server.PromptServer.instance.routes.post("/customnode/install") +async def install_custom_node(request): + json_data = await request.json() + + install_type = json_data['install_type'] + + print(f"Install custom node '{json_data['title']}'") + + res = False + + if len(json_data['files']) == 0: + return web.Response(status=400) + + if install_type == "unzip": + res = unzip_install(json_data['files']) + + elif install_type == "copy": + js_path_name = json_data['js_path'] if 'js_path' in json_data else '.' 
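+ # Editor's note (illustrative only, not part of the original source): the request
+ # body this route consumes looks roughly like the following; the values here are
+ # hypothetical and only the keys actually read by this handler are shown.
+ # {
+ #     "title": "Example Node",                           # used for logging only
+ #     "install_type": "copy",                            # "unzip" | "copy" | "git-clone"
+ #     "files": ["https://example.com/example_node.py"],
+ #     "js_path": ".",                                    # optional, "copy" installs only
+ #     "pip": ["example-package"]                         # optional extra pip packages
+ # }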
+ res = copy_install(json_data['files'], js_path_name) + + elif install_type == "git-clone": + res = gitclone_install(json_data['files']) + + if 'pip' in json_data: + for pname in json_data['pip']: + install_cmd = [sys.executable, "-m", "pip", "install", pname] + try_install_script(json_data['files'][0], ".", install_cmd) + + if res: + print("After restarting ComfyUI, please refresh the browser.") + return web.json_response({}, content_type='application/json') + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.post("/customnode/fix") +async def fix_custom_node(request): + json_data = await request.json() + + install_type = json_data['install_type'] + + print(f"Fix custom node '{json_data['title']}'") + + res = False + + if len(json_data['files']) == 0: + return web.Response(status=400) + + if install_type == "git-clone": + res = gitclone_fix(json_data['files']) + else: + return web.Response(status=400) + + if 'pip' in json_data: + for pname in json_data['pip']: + install_cmd = [sys.executable, "-m", "pip", "install", '-U', pname] + try_install_script(json_data['files'][0], ".", install_cmd) + + if res: + print("After restarting ComfyUI, please refresh the browser.") + return web.json_response({}, content_type='application/json') + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/customnode/install/git_url") +async def install_custom_node_git_url(request): + res = False + if "url" in request.rel_url.query: + url = request.rel_url.query['url'] + res = gitclone_install([url]) + + if res: + print("After restarting ComfyUI, please refresh the browser.") + return web.Response(status=200) + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/customnode/install/pip") +async def install_custom_node_pip(request): + if "packages" in request.rel_url.query: + packages = request.rel_url.query['packages'] + pip_install(packages.split(' ')) + + return web.Response(status=200) + + +@server.PromptServer.instance.routes.post("/customnode/uninstall") +async def uninstall_custom_node(request): + json_data = await request.json() + + install_type = json_data['install_type'] + + print(f"Uninstall custom node '{json_data['title']}'") + + res = False + + if install_type == "copy": + js_path_name = json_data['js_path'] if 'js_path' in json_data else '.' 
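+ # Editor's note (illustrative only, not part of the original source): these
+ # endpoints are normally driven by the bundled manager front end; an equivalent
+ # hypothetical client call, assuming ComfyUI on its default 127.0.0.1:8188, would be:
+ # import requests
+ # requests.post("http://127.0.0.1:8188/customnode/uninstall",
+ #               json={"title": "Example Node", "install_type": "copy",
+ #                     "files": ["https://example.com/example_node.py"]})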
+ res = copy_uninstall(json_data['files'], js_path_name) + + elif install_type == "git-clone": + res = gitclone_uninstall(json_data['files']) + + if res: + print("After restarting ComfyUI, please refresh the browser.") + return web.json_response({}, content_type='application/json') + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.post("/customnode/update") +async def update_custom_node(request): + json_data = await request.json() + + install_type = json_data['install_type'] + + print(f"Update custom node '{json_data['title']}'") + + res = False + + if install_type == "git-clone": + res = gitclone_update(json_data['files']) + + if res: + print("After restarting ComfyUI, please refresh the browser.") + return web.json_response({}, content_type='application/json') + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/comfyui_manager/update_comfyui") +async def update_comfyui(request): + print("Update ComfyUI") + + try: + repo_path = os.path.dirname(folder_paths.__file__) + + if not os.path.exists(os.path.join(repo_path, '.git')): + print("ComfyUI update fail: The installed ComfyUI does not have a Git repository.") + return web.Response(status=400) + + # version check + repo = git.Repo(repo_path) + + if repo.head.is_detached: + switch_to_default_branch(repo) + + current_branch = repo.active_branch + branch_name = current_branch.name + + remote_name = 'origin' + remote = repo.remote(name=remote_name) + + try: + remote.fetch() + except Exception as e: + if 'detected dubious' in str(e): + print("[ComfyUI-Manager] Try fixing 'dubious repository' error on 'ComfyUI' repository") + safedir_path = comfy_path.replace('\\', '/') + subprocess.run(['git', 'config', '--global', '--add', 'safe.directory', safedir_path]) + try: + remote.fetch() + except Exception: + print(f"\n[ComfyUI-Manager] Failed to fix the repository setup. 
Please execute this command on cmd: \n" + f"-----------------------------------------------------------------------------------------\n" + f'git config --global --add safe.directory "{safedir_path}"\n' + f"-----------------------------------------------------------------------------------------\n") + + commit_hash = repo.head.commit.hexsha + remote_commit_hash = repo.refs[f'{remote_name}/{branch_name}'].object.hexsha + + if commit_hash != remote_commit_hash: + git_pull(repo_path) + execute_install_script("ComfyUI", repo_path) + return web.Response(status=201) + else: + return web.Response(status=200) + except Exception as e: + print(f"ComfyUI update fail: {e}", file=sys.stderr) + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.post("/customnode/toggle_active") +async def toggle_active(request): + json_data = await request.json() + + install_type = json_data['install_type'] + is_disabled = json_data['installed'] == "Disabled" + + print(f"Toggle custom node '{json_data['title']}'") + + res = False + + if install_type == "git-clone": + res = gitclone_set_active(json_data['files'], not is_disabled) + elif install_type == "copy": + res = copy_set_active(json_data['files'], not is_disabled, json_data.get('js_path', None)) + + if res: + return web.json_response({}, content_type='application/json') + + return web.Response(status=400) + + +@server.PromptServer.instance.routes.post("/model/install") +async def install_model(request): + json_data = await request.json() + + model_path = get_model_path(json_data) + + res = False + + try: + if model_path is not None: + print(f"Install model '{json_data['name']}' into '{model_path}'") + + model_url = json_data['url'] + + if model_url.startswith('https://github.com') or model_url.startswith('https://huggingface.co') or model_url.startswith('https://heibox.uni-heidelberg.de'): + model_dir = get_model_dir(json_data) + download_url(model_url, model_dir, filename=json_data['filename']) + + return web.json_response({}, content_type='application/json') + else: + res = download_url_with_agent(model_url, model_path) + else: + print(f"Model installation error: invalid model type - {json_data['type']}") + + if res: + return web.json_response({}, content_type='application/json') + except Exception as e: + print(f"[ERROR] {e}", file=sys.stderr) + + return web.Response(status=400) + + +class ManagerTerminalHook: + def write_stderr(self, msg): + server.PromptServer.instance.send_sync("manager-terminal-feedback", {"data": msg}) + + def write_stdout(self, msg): + server.PromptServer.instance.send_sync("manager-terminal-feedback", {"data": msg}) + + +manager_terminal_hook = ManagerTerminalHook() + + +@server.PromptServer.instance.routes.get("/manager/terminal") +async def terminal_mode(request): + if "mode" in request.rel_url.query: + if request.rel_url.query['mode'] == 'true': + sys.__comfyui_manager_terminal_hook.add_hook('cm', manager_terminal_hook) + else: + sys.__comfyui_manager_terminal_hook.remove_hook('cm') + + return web.Response(status=200) + + +@server.PromptServer.instance.routes.get("/manager/preview_method") +async def preview_method(request): + if "value" in request.rel_url.query: + set_preview_method(request.rel_url.query['value']) + write_config() + else: + return web.Response(text=get_current_preview_method(), status=200) + + return web.Response(status=200) + + +@server.PromptServer.instance.routes.get("/manager/badge_mode") +async def badge_mode(request): + if "value" in request.rel_url.query: + 
set_badge_mode(request.rel_url.query['value']) + write_config() + else: + return web.Response(text=get_config()['badge_mode'], status=200) + + return web.Response(status=200) + + +@server.PromptServer.instance.routes.get("/manager/default_ui") +async def default_ui_mode(request): + if "value" in request.rel_url.query: + set_default_ui_mode(request.rel_url.query['value']) + write_config() + else: + return web.Response(text=get_config()['default_ui'], status=200) + + return web.Response(status=200) + + +@server.PromptServer.instance.routes.get("/manager/component/policy") +async def component_policy(request): + if "value" in request.rel_url.query: + set_component_policy(request.rel_url.query['value']) + write_config() + else: + return web.Response(text=get_config()['component_policy'], status=200) + + return web.Response(status=200) + + +@server.PromptServer.instance.routes.get("/manager/dbl_click/policy") +async def dbl_click_policy(request): + if "value" in request.rel_url.query: + set_double_click_policy(request.rel_url.query['value']) + write_config() + else: + return web.Response(text=get_config()['double_click_policy'], status=200) + + return web.Response(status=200) + + +@server.PromptServer.instance.routes.get("/manager/channel_url_list") +async def channel_url_list(request): + channels = get_channel_dict() + if "value" in request.rel_url.query: + channel_url = channels.get(request.rel_url.query['value']) + if channel_url is not None: + get_config()['channel_url'] = channel_url + write_config() + else: + selected = 'custom' + selected_url = get_config()['channel_url'] + + for name, url in channels.items(): + if url == selected_url: + selected = name + break + + res = {'selected': selected, + 'list': get_channel_list()} + return web.json_response(res, status=200) + + return web.Response(status=200) + + +@server.PromptServer.instance.routes.get("/manager/notice") +async def get_notice(request): + url = "github.com" + path = "/ltdrdata/ltdrdata.github.io/wiki/News" + + async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session: + async with session.get(f"https://{url}{path}") as response: + if response.status == 200: + # html_content = response.read().decode('utf-8') + html_content = await response.text() + + pattern = re.compile(r'
<div class="markdown-body">([\s\S]*?)</div>') + match = pattern.search(html_content) + + if match: + markdown_content = match.group(1) + markdown_content += f"<HR>ComfyUI: {comfy_ui_revision}[{comfy_ui_hash[:6]}]({comfy_ui_commit_datetime.date()})" + # markdown_content += f"<BR>         ()" + markdown_content += f"<BR>Manager: {version_str}" + + try: + if comfy_ui_required_commit_datetime.date() > comfy_ui_commit_datetime.date(): + markdown_content = f'<P style="text-align: center; color:red; background-color:black; font-weight:bold">Your ComfyUI is too OUTDATED!!!</P>
' + markdown_content + except: + pass + + return web.Response(text=markdown_content, status=200) + else: + return web.Response(text="Unable to retrieve Notice", status=200) + else: + return web.Response(text="Unable to retrieve Notice", status=200) + + +@server.PromptServer.instance.routes.get("/manager/reboot") +def restart(request): + try: + sys.stdout.close_log() + except Exception: + pass + + return os.execv(sys.executable, [sys.executable] + sys.argv) + + +def sanitize_filename(input_string): + # Replace any character other than letters, digits, and underscores with an underscore + result_string = re.sub(r'[^a-zA-Z0-9_]', '_', input_string) + return result_string + + +@server.PromptServer.instance.routes.post("/manager/component/save") +async def save_component(request): + try: + data = await request.json() + name = data['name'] + workflow = data['workflow'] + + if not os.path.exists(components_path): + os.mkdir(components_path) + + if 'packname' in workflow and workflow['packname'] != '': + sanitized_name = sanitize_filename(workflow['packname'])+'.pack' + else: + sanitized_name = sanitize_filename(name)+'.json' + + filepath = os.path.join(components_path, sanitized_name) + components = {} + if os.path.exists(filepath): + with open(filepath) as f: + components = json.load(f) + + components[name] = workflow + + with open(filepath, 'w') as f: + json.dump(components, f, indent=4, sort_keys=True) + return web.Response(text=filepath, status=200) + except: + return web.Response(status=400) + + +@server.PromptServer.instance.routes.post("/manager/component/loads") +async def load_components(request): + try: + json_files = [f for f in os.listdir(components_path) if f.endswith('.json')] + pack_files = [f for f in os.listdir(components_path) if f.endswith('.pack')] + + components = {} + for json_file in json_files + pack_files: + file_path = os.path.join(components_path, json_file) + with open(file_path, 'r') as file: + try: + # When there is a conflict between the .pack and the .json, the pack takes precedence and overrides. 
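+ # Editor's note: that precedence follows from the iteration order above -- the
+ # loop walks `json_files + pack_files`, so .pack entries are loaded last and the
+ # dict.update() below lets them overwrite same-named entries from .json files.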
+ components.update(json.load(file)) + except json.JSONDecodeError as e: + print(f"[ComfyUI-Manager] Error decoding the component file '{json_file}': {e}") + + return web.json_response(components) + except Exception as e: + print(f"[ComfyUI-Manager] Failed to load components\n{e}") + return web.Response(status=400) + + +@server.PromptServer.instance.routes.get("/manager/share_option") +async def share_option(request): + if "value" in request.rel_url.query: + get_config()['share_option'] = request.rel_url.query['value'] + write_config() + else: + return web.Response(text=get_config()['share_option'], status=200) + + return web.Response(status=200) + + +def get_openart_auth(): + if not os.path.exists(os.path.join(comfyui_manager_path, ".openart_key")): + return None + try: + with open(os.path.join(comfyui_manager_path, ".openart_key"), "r") as f: + openart_key = f.read().strip() + return openart_key if openart_key else None + except: + return None + + +def get_matrix_auth(): + if not os.path.exists(os.path.join(comfyui_manager_path, "matrix_auth")): + return None + try: + with open(os.path.join(comfyui_manager_path, "matrix_auth"), "r") as f: + matrix_auth = f.read() + homeserver, username, password = matrix_auth.strip().split("\n") + if not homeserver or not username or not password: + return None + return { + "homeserver": homeserver, + "username": username, + "password": password, + } + except: + return None + + +def get_comfyworkflows_auth(): + if not os.path.exists(os.path.join(comfyui_manager_path, "comfyworkflows_sharekey")): + return None + try: + with open(os.path.join(comfyui_manager_path, "comfyworkflows_sharekey"), "r") as f: + share_key = f.read() + if not share_key.strip(): + return None + return share_key + except: + return None + + +def get_youml_settings(): + if not os.path.exists(os.path.join(comfyui_manager_path, ".youml")): + return None + try: + with open(os.path.join(comfyui_manager_path, ".youml"), "r") as f: + youml_settings = f.read().strip() + return youml_settings if youml_settings else None + except: + return None + + +def set_youml_settings(settings): + with open(os.path.join(comfyui_manager_path, ".youml"), "w") as f: + f.write(settings) + + +@server.PromptServer.instance.routes.get("/manager/get_openart_auth") +async def api_get_openart_auth(request): + # print("Getting stored OpenArt key...") + openart_key = get_openart_auth() + if not openart_key: + return web.Response(status=404) + return web.json_response({"openart_key": openart_key}) + + +@server.PromptServer.instance.routes.post("/manager/set_openart_auth") +async def api_set_openart_auth(request): + json_data = await request.json() + openart_key = json_data['openart_key'] + with open(os.path.join(comfyui_manager_path, ".openart_key"), "w") as f: + f.write(openart_key) + return web.Response(status=200) + + +@server.PromptServer.instance.routes.get("/manager/get_matrix_auth") +async def api_get_matrix_auth(request): + # print("Getting stored Matrix credentials...") + matrix_auth = get_matrix_auth() + if not matrix_auth: + return web.Response(status=404) + return web.json_response(matrix_auth) + + +@server.PromptServer.instance.routes.get("/manager/youml/settings") +async def api_get_youml_settings(request): + youml_settings = get_youml_settings() + if not youml_settings: + return web.Response(status=404) + return web.json_response(json.loads(youml_settings)) + + +@server.PromptServer.instance.routes.post("/manager/youml/settings") +async def api_set_youml_settings(request): + json_data = await 
request.json() + set_youml_settings(json.dumps(json_data)) + return web.Response(status=200) + + +@server.PromptServer.instance.routes.get("/manager/get_comfyworkflows_auth") +async def api_get_comfyworkflows_auth(request): + # Check if the user has provided a ComfyWorkflows share key in a file called + # 'comfyworkflows_sharekey' in the ComfyUI-Manager directory + # print("Getting stored Comfyworkflows.com auth...") + comfyworkflows_auth = get_comfyworkflows_auth() + if not comfyworkflows_auth: + return web.Response(status=404) + return web.json_response({"comfyworkflows_sharekey" : comfyworkflows_auth}) + + +def set_matrix_auth(json_data): + homeserver = json_data['homeserver'] + username = json_data['username'] + password = json_data['password'] + with open(os.path.join(comfyui_manager_path, "matrix_auth"), "w") as f: + f.write("\n".join([homeserver, username, password])) + + +def set_comfyworkflows_auth(comfyworkflows_sharekey): + with open(os.path.join(comfyui_manager_path, "comfyworkflows_sharekey"), "w") as f: + f.write(comfyworkflows_sharekey) + + +def has_provided_matrix_auth(matrix_auth): + return matrix_auth['homeserver'].strip() and matrix_auth['username'].strip() and matrix_auth['password'].strip() + + +def has_provided_comfyworkflows_auth(comfyworkflows_sharekey): + return comfyworkflows_sharekey.strip() + + + +def extract_model_file_names(json_data): + """Extract unique file names from the input JSON data.""" + file_names = set() + model_filename_extensions = {'.safetensors', '.ckpt', '.pt', '.pth', '.bin'} + + # Recursively search for file names in the JSON data + def recursive_search(data): + if isinstance(data, dict): + for value in data.values(): + recursive_search(value) + elif isinstance(data, list): + for item in data: + recursive_search(item) + elif isinstance(data, str) and '.' 
in data: + file_names.add(os.path.basename(data)) # file_names.add(data) + + recursive_search(json_data) + return [f for f in list(file_names) if os.path.splitext(f)[1] in model_filename_extensions] + +def find_file_paths(base_dir, file_names): + """Find the paths of the files in the base directory.""" + file_paths = {} + + for root, dirs, files in os.walk(base_dir): + # Exclude certain directories + dirs[:] = [d for d in dirs if d not in ['.git']] + + for file in files: + if file in file_names: + file_paths[file] = os.path.join(root, file) + return file_paths + +def compute_sha256_checksum(filepath): + """Compute the SHA256 checksum of a file, in chunks""" + sha256 = hashlib.sha256() + with open(filepath, 'rb') as f: + for chunk in iter(lambda: f.read(4096), b''): + sha256.update(chunk) + return sha256.hexdigest() + +@server.PromptServer.instance.routes.post("/manager/share") +async def share_art(request): + # get json data + json_data = await request.json() + + matrix_auth = json_data['matrix_auth'] + comfyworkflows_sharekey = json_data['cw_auth']['cw_sharekey'] + + set_matrix_auth(matrix_auth) + set_comfyworkflows_auth(comfyworkflows_sharekey) + + share_destinations = json_data['share_destinations'] + credits = json_data['credits'] + title = json_data['title'] + description = json_data['description'] + is_nsfw = json_data['is_nsfw'] + prompt = json_data['prompt'] + potential_outputs = json_data['potential_outputs'] + selected_output_index = json_data['selected_output_index'] + + try: + output_to_share = potential_outputs[int(selected_output_index)] + except: + # for now, pick the first output + output_to_share = potential_outputs[0] + + assert output_to_share['type'] in ('image', 'output') + output_dir = folder_paths.get_output_directory() + + if output_to_share['type'] == 'image': + asset_filename = output_to_share['image']['filename'] + asset_subfolder = output_to_share['image']['subfolder'] + + if output_to_share['image']['type'] == 'temp': + output_dir = folder_paths.get_temp_directory() + else: + asset_filename = output_to_share['output']['filename'] + asset_subfolder = output_to_share['output']['subfolder'] + + if asset_subfolder: + asset_filepath = os.path.join(output_dir, asset_subfolder, asset_filename) + else: + asset_filepath = os.path.join(output_dir, asset_filename) + + # get the mime type of the asset + assetFileType = mimetypes.guess_type(asset_filepath)[0] + + share_website_host = "UNKNOWN" + if "comfyworkflows" in share_destinations: + share_website_host = "https://comfyworkflows.com" + share_endpoint = f"{share_website_host}/api" + + # get presigned urls + async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session: + async with session.post( + f"{share_endpoint}/get_presigned_urls", + json={ + "assetFileName": asset_filename, + "assetFileType": assetFileType, + "workflowJsonFileName" : 'workflow.json', + "workflowJsonFileType" : 'application/json', + }, + ) as resp: + assert resp.status == 200 + presigned_urls_json = await resp.json() + assetFilePresignedUrl = presigned_urls_json["assetFilePresignedUrl"] + assetFileKey = presigned_urls_json["assetFileKey"] + workflowJsonFilePresignedUrl = presigned_urls_json["workflowJsonFilePresignedUrl"] + workflowJsonFileKey = presigned_urls_json["workflowJsonFileKey"] + + # upload asset + async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session: + async with session.put(assetFilePresignedUrl, data=open(asset_filepath, "rb")) as 
resp: + assert resp.status == 200 + + # upload workflow json + async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session: + async with session.put(workflowJsonFilePresignedUrl, data=json.dumps(prompt['workflow']).encode('utf-8')) as resp: + assert resp.status == 200 + + model_filenames = extract_model_file_names(prompt['workflow']) + model_file_paths = find_file_paths(folder_paths.base_path, model_filenames) + + models_info = {} + for filename, filepath in model_file_paths.items(): + models_info[filename] = { + "filename": filename, + "sha256_checksum": compute_sha256_checksum(filepath), + "relative_path": os.path.relpath(filepath, folder_paths.base_path), + } + + # make a POST request to /api/upload_workflow with form data key values + async with aiohttp.ClientSession(trust_env=True, connector=aiohttp.TCPConnector(verify_ssl=False)) as session: + form = aiohttp.FormData() + if comfyworkflows_sharekey: + form.add_field("shareKey", comfyworkflows_sharekey) + form.add_field("source", "comfyui_manager") + form.add_field("assetFileKey", assetFileKey) + form.add_field("assetFileType", assetFileType) + form.add_field("workflowJsonFileKey", workflowJsonFileKey) + form.add_field("sharedWorkflowWorkflowJsonString", json.dumps(prompt['workflow'])) + form.add_field("sharedWorkflowPromptJsonString", json.dumps(prompt['output'])) + form.add_field("shareWorkflowCredits", credits) + form.add_field("shareWorkflowTitle", title) + form.add_field("shareWorkflowDescription", description) + form.add_field("shareWorkflowIsNSFW", str(is_nsfw).lower()) + form.add_field("currentSnapshot", json.dumps(get_current_snapshot())) + form.add_field("modelsInfo", json.dumps(models_info)) + + async with session.post( + f"{share_endpoint}/upload_workflow", + data=form, + ) as resp: + assert resp.status == 200 + upload_workflow_json = await resp.json() + workflowId = upload_workflow_json["workflowId"] + + # check if the user has provided Matrix credentials + if "matrix" in share_destinations: + comfyui_share_room_id = '!LGYSoacpJPhIfBqVfb:matrix.org' + filename = os.path.basename(asset_filepath) + content_type = assetFileType + + try: + from matrix_client.api import MatrixHttpApi + from matrix_client.client import MatrixClient + + homeserver = 'matrix.org' + if matrix_auth: + homeserver = matrix_auth.get('homeserver', 'matrix.org') + homeserver = homeserver.replace("http://", "https://") + if not homeserver.startswith("https://"): + homeserver = "https://" + homeserver + + client = MatrixClient(homeserver) + try: + token = client.login(username=matrix_auth['username'], password=matrix_auth['password']) + if not token: + return web.json_response({"error" : "Invalid Matrix credentials."}, content_type='application/json', status=400) + except: + return web.json_response({"error" : "Invalid Matrix credentials."}, content_type='application/json', status=400) + + matrix = MatrixHttpApi(homeserver, token=token) + with open(asset_filepath, 'rb') as f: + mxc_url = matrix.media_upload(f.read(), content_type, filename=filename)['content_uri'] + + workflow_json_mxc_url = matrix.media_upload(json.dumps(prompt['workflow']), 'application/json', filename='workflow.json')['content_uri'] + + text_content = "" + if title: + text_content += f"{title}\n" + if description: + text_content += f"{description}\n" + if credits: + text_content += f"\ncredits: {credits}\n" + response = matrix.send_message(comfyui_share_room_id, text_content) + response = matrix.send_content(comfyui_share_room_id, mxc_url, filename, 
'm.image') + response = matrix.send_content(comfyui_share_room_id, workflow_json_mxc_url, 'workflow.json', 'm.file') + except: + import traceback + traceback.print_exc() + return web.json_response({"error": "An error occurred when sharing your art to Matrix."}, content_type='application/json', status=500) + + return web.json_response({ + "comfyworkflows": { + "url": None if "comfyworkflows" not in share_destinations else f"{share_website_host}/workflows/{workflowId}", + }, + "matrix": { + "success": None if "matrix" not in share_destinations else True + } + }, content_type='application/json', status=200) + + +def sanitize(data): + return data.replace("<", "&lt;").replace(">", "&gt;") + + +def lookup_customnode_by_url(data, target): + for x in data['custom_nodes']: + if target in x['files']: + dir_name = os.path.splitext(os.path.basename(target))[0].replace(".git", "") + dir_path = os.path.join(custom_nodes_path, dir_name) + if os.path.exists(dir_path): + x['installed'] = 'True' + elif os.path.exists(dir_path + ".disabled"): + x['installed'] = 'Disabled' + return x + + return None + + +async def _confirm_try_install(sender, custom_node_url, msg): + json_obj = await get_data_by_mode('default', 'custom-node-list.json') + + sender = sanitize(sender) + msg = sanitize(msg) + target = lookup_customnode_by_url(json_obj, custom_node_url) + + if target is not None: + server.PromptServer.instance.send_sync("cm-api-try-install-customnode", + {"sender": sender, "target": target, "msg": msg}) + else: + print(f"[ComfyUI Manager API] Failed to try install - Unknown custom node url '{custom_node_url}'") + + +def confirm_try_install(sender, custom_node_url, msg): + asyncio.run(_confirm_try_install(sender, custom_node_url, msg)) + + +cm_global.register_api('cm.try-install-custom-node', confirm_try_install) + + +import asyncio +async def default_cache_update(): + async def get_cache(filename): + uri = 'https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/' + filename + cache_uri = str(simple_hash(uri)) + '_' + filename + cache_uri = os.path.join(cache_dir, cache_uri) + + json_obj = await get_data(uri, True) + + with cache_lock: + with open(cache_uri, "w", encoding='utf-8') as file: + json.dump(json_obj, file, indent=4, sort_keys=True) + print(f"[ComfyUI-Manager] default cache updated: {uri}") + + a = get_cache("custom-node-list.json") + b = get_cache("extension-node-map.json") + c = get_cache("model-list.json") + d = get_cache("alter-list.json") + + await asyncio.gather(a, b, c, d) + + +threading.Thread(target=lambda: asyncio.run(default_cache_update())).start() + + +if not os.path.exists(config_path): + get_config() + write_config() + + +WEB_DIRECTORY = "js" +NODE_CLASS_MAPPINGS = {} +__all__ = ['NODE_CLASS_MAPPINGS'] + +cm_global.register_extension('ComfyUI-Manager', + {'version': version, + 'name': 'ComfyUI Manager', + 'nodes': {'Terminal Log //CM'}, + 'description': 'It provides the ability to manage custom nodes in ComfyUI.', }) + diff --git a/custom_nodes/ComfyUI-Manager/alter-list.json b/custom_nodes/ComfyUI-Manager/alter-list.json new file mode 100644 index 0000000000000000000000000000000000000000..7822ffc0785ed9c3a40ddd69a87f31e8abde2c44 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/alter-list.json @@ -0,0 +1,209 @@ +{ + "items": [ + { + "id":"https://github.com/Fannovel16/comfyui_controlnet_aux", + "tags":"controlnet", + "description": "This extension provides preprocessor nodes for using controlnet." 
+ }, + { + "id":"https://github.com/comfyanonymous/ComfyUI_experiments", + "tags":"Dynamic Thresholding, DT, CFG, controlnet, reference only", + "description": "These experimental nodes contain a 'Reference Only' node and a 'ModelSamplerTonemapNoiseTest' node corresponding to 'Dynamic Thresholding'." + }, + { + "id":"https://github.com/ltdrdata/ComfyUI-Impact-Pack", + "tags":"ddetailer, adetailer, ddsd, DD, loopback scaler, prompt, wildcard, dynamic prompt", + "description": "To implement the feature of automatically detecting faces and enhancing details, various detection nodes and detailers provided by the Impact Pack can be applied. Similar to Loopback Scaler, it also provides various custom workflows that can apply KSampler while gradually scaling up." + }, + { + "id":"https://github.com/ltdrdata/ComfyUI-Inspire-Pack", + "tags":"lora block weight, effective block analyzer, lbw, variation seed", + "description": "The Inspire Pack provides the functionality of Lora Block Weight and Variation Seed." + }, + { + "id":"https://github.com/biegert/ComfyUI-CLIPSeg/raw/main/custom_nodes/clipseg.py", + "tags":"ddsd", + "description": "This extension provides a feature that generates segment masks on an image using a text prompt. When used in conjunction with Impact Pack, it enables applications such as DDSD." + }, + { + "id":"https://github.com/BadCafeCode/masquerade-nodes-comfyui", + "tags":"ddetailer", + "description": "This extension provides a way to recognize and enhance masks for faces similar to Impact Pack." + }, + { + "id":"https://github.com/BlenderNeko/ComfyUI_Cutoff", + "tags":"cutoff", + "description": "By using this extension, prompts like 'blue hair' can be prevented from interfering with other prompts by blocking the attribute 'blue' from being used in prompts other than 'hair'." + }, + { + "id":"https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb", + "tags":"prompt, weight", + "description": "There are differences in the processing methods of prompts, such as weighting and scheduling, between A1111 and ComfyUI. With this extension, various settings can be used to implement prompt processing methods similar to A1111. As this feature is also integrated into ComfyUI Cutoff, please download the Cutoff extension if you plan to use it in conjunction with Cutoff." + }, + { + "id":"https://github.com/shiimizu/ComfyUI_smZNodes", + "tags":"prompt, weight", + "description": "There are differences in the processing methods of prompts, such as weighting and scheduling, between A1111 and ComfyUI. This extension helps to reproduce the same embedding as A1111." + }, + { + "id":"https://github.com/BlenderNeko/ComfyUI_Noise", + "tags":"img2img alt, random", + "description": "The extension provides an unsampler that reverses the sampling process, allowing for a function similar to img2img alt to be implemented. Furthermore, ComfyUI uses CPU's Random instead of GPU's Random for better reproducibility compared to A1111. This extension provides the ability to use GPU's Random for Latent Noise. However, since GPU's Random may vary depending on the GPU model, reproducibility on different devices cannot be guaranteed." + }, + { + "id":"https://github.com/BlenderNeko/ComfyUI_SeeCoder", + "tags":"seecoder, prompt-free-diffusion", + "description": "This extension provides the SeeCoder feature." 
+ }, + { + "id":"https://github.com/lilly1987/ComfyUI_node_Lilly", + "tags":"prompt, wildcard", + "description": "This extension provides features such as a wildcard function that randomly selects prompts belonging to a category and the ability to directly load lora from prompts." + }, + { + "id":"https://github.com/Davemane42/ComfyUI_Dave_CustomNode", + "tags":"latent couple", + "description": "ComfyUI already provides the ability to composite latents by default. However, this extension makes it more convenient to use by visualizing the composite area." + }, + { + "id":"https://github.com/LEv145/images-grid-comfy-plugin", + "tags":"X/Y Plot", + "description": "This tool provides a viewer node that allows for checking multiple outputs in a grid, similar to the X/Y Plot extension." + }, + { + "id":"https://github.com/pythongosssss/ComfyUI-WD14-Tagger", + "tags":"deepbooru, clip interrogation", + "description": "This extension generates clip text by taking an image as input and using the Deepbooru model." + }, + { + "id":"https://github.com/szhublox/ambw_comfyui", + "tags":"supermerger", + "description": "This node takes two models, merges individual blocks together at various ratios, and automatically rates each merge, keeping the ratio with the highest score." + }, + { + "id":"https://github.com/ssitu/ComfyUI_UltimateSDUpscale", + "tags":"upscaler, Ultimate SD Upscale", + "description": "ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A. Uses the same script used in the A1111 extension to hopefully replicate images generated using the A1111 webui." + }, + { + "id":"https://github.com/dawangraoming/ComfyUI_ksampler_gpu/raw/main/ksampler_gpu.py", + "tags":"random, noise", + "description": "A1111 provides a KSampler that uses GPU-based random noise. This extension offers a KSampler utilizing GPU-based random noise." + }, + { + "id":"https://github.com/space-nuko/nui-suite", + "tags":"prompt, dynamic prompt", + "description": "This extension provides nodes with the functionality of dynamic prompts." + }, + { + "id":"https://github.com/melMass/comfy_mtb", + "tags":"roop", + "description": "This extension provides a bunch of nodes, including roop." + }, + { + "id":"https://github.com/ssitu/ComfyUI_roop", + "tags":"roop", + "description": "This extension provides nodes for the roop A1111 webui script." + }, + { + "id":"https://github.com/asagi4/comfyui-prompt-control", + "tags":"prompt, prompt editing", + "description": "This extension provides the ability to use prompts like \n\n**a [large::0.1] [cat|dog:0.05] [::0.5] [in a park:in space:0.4]**\n\n" + }, + { + "id":"https://github.com/adieyal/comfyui-dynamicprompts", + "tags":"prompt, dynamic prompt", + "description": "This extension is a port of sd-dynamic-prompt to ComfyUI." + }, + { + "id":"https://github.com/kwaroran/abg-comfyui", + "tags":"abg, background remover", + "description": "An Anime Background Remover node for ComfyUI, based on this HF space; works the same as the ABG extension in automatic1111." + }, + { + "id":"https://github.com/Gourieff/comfyui-reactor-node", + "tags":"reactor, sd-webui-roop-nsfw", + "description": "This is a version of the sd-webui-roop-nsfw extension ported to ComfyUI." + }, + { + "id":"https://github.com/laksjdjf/attention-couple-ComfyUI", + "tags":"regional prompt, latent couple, prompt", + "description": "These custom nodes provide functionality similar to regional prompts, offering couple features at the attention level." 
+ }, + { + "id":"https://github.com/FizzleDorf/ComfyUI_FizzNodes", + "tags":"deforum", + "description": "These custom nodes provide functionality that assists in animation creation, similar to Deforum." + }, + { + "id":"https://github.com/seanlynch/comfyui-optical-flow", + "tags":"deforum, vid2vid", + "description": "These custom nodes provide functionality that assists in animation creation, similar to Deforum." + }, + { + "id":"https://github.com/ssitu/ComfyUI_fabric", + "tags":"fabric", + "description": "Similar to sd-webui-fabric, these custom nodes provide the functionality of [a/FABRIC](https://github.com/sd-fabric/fabric)." + }, + { + "id":"https://github.com/Zuellni/ComfyUI-ExLlama", + "tags":"ExLlama, prompt, language model", + "description": "Similar to text-generation-webui, these custom nodes provide the functionality of [a/exllama](https://github.com/turboderp/exllama)." + }, + { + "id":"https://github.com/spinagon/ComfyUI-seamless-tiling", + "tags":"tiling", + "description": "ComfyUI node for generating seamless textures. Replicates the 'Tiling' option from A1111." + }, + { + "id":"https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI", + "tags":"cd-tuner, negpip", + "description": "This extension is a port of the [a/sd-webui-cd-tuner](https://github.com/hako-mikan/sd-webui-cd-tuner) (a.k.a. CD (Color/Detail) Tuner) and [a/sd-webui-negpip](https://github.com/hako-mikan/sd-webui-negpip) (a.k.a. NegPiP) extensions of A1111 to ComfyUI." + }, + { + "id":"https://github.com/mcmonkeyprojects/sd-dynamic-thresholding", + "tags":"DT, dynamic thresholding", + "description": "This custom node is a port of the Dynamic Thresholding extension from A1111 to make it available for use in ComfyUI." + }, + { + "id":"https://github.com/hhhzzyang/Comfyui_Lama", + "tags":"lama, inpainting anything", + "description": "This extension provides custom nodes developed based on [a/LaMa](https://github.com/advimman/lama) and [a/Inpainting anything](https://github.com/geekyutao/Inpaint-Anything)." + }, + { + "id":"https://github.com/mlinmg/ComfyUI-LaMA-Preprocessor", + "tags":"lama", + "description": "This extension provides custom nodes for [a/LaMa](https://github.com/advimman/lama) functionality." + }, + { + "id":"https://github.com/Haoming02/comfyui-diffusion-cg", + "tags":"diffusion-cg", + "description": "This extension provides custom nodes for [a/SD Webui Diffusion Color Grading](https://github.com/Haoming02/sd-webui-diffusion-cg) functionality." + }, + { + "id":"https://github.com/asagi4/ComfyUI-CADS", + "tags":"diffusion-cg", + "description": "This extension provides custom nodes for [a/sd-webui-cads](https://github.com/v0xie/sd-webui-cads) functionality." + }, + { + "id":"https://git.mmaker.moe/mmaker/sd-webui-color-enhance", + "tags":"color-enhance", + "description": "This extension supports both A1111 and ComfyUI simultaneously." + }, + { + "id":"https://github.com/shiimizu/ComfyUI-TiledDiffusion", + "tags":"multidiffusion", + "description": "This extension provides custom nodes for [a/Mixture of Diffusers](https://github.com/albarji/mixture-of-diffusers) and [a/MultiDiffusion](https://github.com/omerbt/MultiDiffusion)." + }, + { + "id":"https://github.com/abyz22/image_control", + "tags":"BMAB", + "description": "This extension provides some alternative functionalities of the [a/sd-webui-bmab](https://github.com/portu-sim/sd-webui-bmab) extension." 
+ }, + { + "id":"https://github.com/blepping/ComfyUI-sonar", + "tags":"sonar", + "description": "This extension provides some alternative functionalities of the [a/stable-diffusion-webui-sonar](https://github.com/Kahsolt/stable-diffusion-webui-sonar) extension." + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/channels.list.template b/custom_nodes/ComfyUI-Manager/channels.list.template new file mode 100644 index 0000000000000000000000000000000000000000..9a8d6877b3b0f62be0f92b3ae81aea8337952cba --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/channels.list.template @@ -0,0 +1,6 @@ +default::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main +recent::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/new +legacy::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/legacy +forked::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/forked +dev::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/dev +tutorial::https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/node_db/tutorial \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/check.bat b/custom_nodes/ComfyUI-Manager/check.bat new file mode 100644 index 0000000000000000000000000000000000000000..e7a3b09fc58f749522aef0053f48e5a0a55cf955 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/check.bat @@ -0,0 +1,21 @@ +@echo off + +python json-checker.py "custom-node-list.json" +python json-checker.py "model-list.json" +python json-checker.py "alter-list.json" +python json-checker.py "extension-node-map.json" +python json-checker.py "node_db\new\custom-node-list.json" +python json-checker.py "node_db\new\model-list.json" +python json-checker.py "node_db\new\extension-node-map.json" +python json-checker.py "node_db\dev\custom-node-list.json" +python json-checker.py "node_db\dev\model-list.json" +python json-checker.py "node_db\dev\extension-node-map.json" +python json-checker.py "node_db\tutorial\custom-node-list.json" +python json-checker.py "node_db\tutorial\model-list.json" +python json-checker.py "node_db\tutorial\extension-node-map.json" +python json-checker.py "node_db\legacy\custom-node-list.json" +python json-checker.py "node_db\legacy\model-list.json" +python json-checker.py "node_db\legacy\extension-node-map.json" +python json-checker.py "node_db\forked\custom-node-list.json" +python json-checker.py "node_db\forked\model-list.json" +python json-checker.py "node_db\forked\extension-node-map.json" \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/check.sh b/custom_nodes/ComfyUI-Manager/check.sh new file mode 100755 index 0000000000000000000000000000000000000000..9260dbe1844921e9dbfe929cfcf97429e33f2723 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/check.sh @@ -0,0 +1,27 @@ +#!/bin/bash + +files=( + "custom-node-list.json" + "model-list.json" + "alter-list.json" + "extension-node-map.json" + "node_db/new/custom-node-list.json" + "node_db/new/model-list.json" + "node_db/new/extension-node-map.json" + "node_db/dev/custom-node-list.json" + "node_db/dev/model-list.json" + "node_db/dev/extension-node-map.json" + "node_db/tutorial/custom-node-list.json" + "node_db/tutorial/model-list.json" + "node_db/tutorial/extension-node-map.json" + "node_db/legacy/custom-node-list.json" + "node_db/legacy/model-list.json" + "node_db/legacy/extension-node-map.json" + "node_db/forked/custom-node-list.json" + "node_db/forked/model-list.json" + "node_db/forked/extension-node-map.json" +) + 
+for file in "${files[@]}"; do + python json-checker.py "$file" +done diff --git a/custom_nodes/ComfyUI-Manager/components/.gitignore b/custom_nodes/ComfyUI-Manager/components/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..fab7a5e2492a15845fc6e2c1d5723ebccd11f75e --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/components/.gitignore @@ -0,0 +1,2 @@ +*.json +*.pack diff --git a/custom_nodes/ComfyUI-Manager/config.ini b/custom_nodes/ComfyUI-Manager/config.ini new file mode 100644 index 0000000000000000000000000000000000000000..71612530f1322442800b3a3aca556b4367b7733f --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/config.ini @@ -0,0 +1,13 @@ +[default] +preview_method = none +badge_mode = none +git_exe = +channel_url = https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main +share_option = all +bypass_ssl = False +file_logging = True +default_ui = none +component_policy = workflow +double_click_policy = copy-all +windows_selector_event_loop_policy = False + diff --git a/custom_nodes/ComfyUI-Manager/custom-node-list.json b/custom_nodes/ComfyUI-Manager/custom-node-list.json new file mode 100644 index 0000000000000000000000000000000000000000..87c3c0c3bbdaa5a0b2e10fb2169b04d425435ae4 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/custom-node-list.json @@ -0,0 +1,5589 @@ +{ + "custom_nodes": [ + { + "author": "Dr.Lt.Data", + "title": "ComfyUI-Manager", + "reference": "https://github.com/ltdrdata/ComfyUI-Manager", + "files": [ + "https://github.com/ltdrdata/ComfyUI-Manager" + ], + "install_type": "git-clone", + "description": "ComfyUI-Manager itself is also a custom node." + }, + { + "author": "Dr.Lt.Data", + "title": "ComfyUI Impact Pack", + "reference": "https://github.com/ltdrdata/ComfyUI-Impact-Pack", + "files": [ + "https://github.com/ltdrdata/ComfyUI-Impact-Pack" + ], + "pip": ["ultralytics"], + "install_type": "git-clone", + "description": "This extension offers various detector nodes and detailer nodes that allow you to configure a workflow that automatically enhances facial details. It also provides an iterative upscaler.\n[w/NOTE:'Segs & Mask' has been renamed to 'ImpactSegsAndMask.' Please replace the node with the new name.]" + }, + { + "author": "Dr.Lt.Data", + "title": "ComfyUI Inspire Pack", + "reference": "https://github.com/ltdrdata/ComfyUI-Inspire-Pack", + "nodename_pattern": "Inspire$", + "files": [ + "https://github.com/ltdrdata/ComfyUI-Inspire-Pack" + ], + "install_type": "git-clone", + "description": "This extension provides various nodes to support Lora Block Weight and the Impact Pack. Provides many easily applicable regional features and applications for Variation Seed." + }, + { + "author": "comfyanonymous", + "title": "ComfyUI_experiments", + "reference": "https://github.com/comfyanonymous/ComfyUI_experiments", + "files": [ + "https://github.com/comfyanonymous/ComfyUI_experiments" + ], + "install_type": "git-clone", + "description": "Nodes: ModelSamplerTonemapNoiseTest, TonemapNoiseWithRescaleCFG, ReferenceOnlySimple, RescaleClassifierFreeGuidanceTest, ModelMergeBlockNumber, ModelMergeSDXL, ModelMergeSDXLTransformers, ModelMergeSDXLDetailedTransformers. [w/NOTE: This is a consolidation of the previously separate custom nodes. 
Please delete the sampler_tonemap.py, sampler_rescalecfg.py, advanced_model_merging.py, sdxl_model_merging.py, and reference_only.py files installed in custom_nodes before.]" + }, + { + "author": "Stability-AI", + "title": "stability-ComfyUI-nodes", + "reference": "https://github.com/Stability-AI/stability-ComfyUI-nodes", + "files": [ + "https://github.com/Stability-AI/stability-ComfyUI-nodes" + ], + "install_type": "git-clone", + "description": "Nodes: ColorBlend, ControlLoraSave, GetImageSize. NOTE: Control-LoRA recolor example uses these nodes." + }, + { + "author": "Fannovel16", + "title": "ComfyUI's ControlNet Auxiliary Preprocessors", + "reference": "https://github.com/Fannovel16/comfyui_controlnet_aux", + "files": [ + "https://github.com/Fannovel16/comfyui_controlnet_aux" + ], + "install_type": "git-clone", + "description": "This is a rework of comfyui_controlnet_preprocessors based on ControlNet auxiliary models by 🤗. I think the old repo isn't good enough to maintain. All old workflows will still work with this repo, but the version option won't do anything. Almost all v1 preprocessors are replaced by v1.1, except those that don't appear in v1.1. [w/NOTE: Please refrain from using the controlnet preprocessor alongside this installation, as it may lead to conflicts and prevent proper recognition.]" + }, + { + "author": "Fannovel16", + "title": "ComfyUI Frame Interpolation", + "reference": "https://github.com/Fannovel16/ComfyUI-Frame-Interpolation", + "files": [ + "https://github.com/Fannovel16/ComfyUI-Frame-Interpolation" + ], + "install_type": "git-clone", + "description": "Nodes: KSampler Gradually Adding More Denoise (efficient)" + }, + { + "author": "Fannovel16", + "title": "ComfyUI Loopchain", + "reference": "https://github.com/Fannovel16/ComfyUI-Loopchain", + "files": [ + "https://github.com/Fannovel16/ComfyUI-Loopchain" + ], + "install_type": "git-clone", + "description": "A collection of nodes which can be useful for animation in ComfyUI. The main focus of this extension is implementing a mechanism called loopchain. A loopchain in this case is the chain of nodes only executed repeatedly in the workflow. If a node chain contains a loop node from this extension, it will become a loop chain." + }, + { + "author": "Fannovel16", + "title": "ComfyUI MotionDiff", + "reference": "https://github.com/Fannovel16/ComfyUI-MotionDiff", + "files": [ + "https://github.com/Fannovel16/ComfyUI-MotionDiff" + ], + "install_type": "git-clone", + "description": "Implementation of MDM, MotionDiffuse and ReMoDiffuse into ComfyUI." + }, + { + "author": "Fannovel16", + "title": "ComfyUI-Video-Matting", + "reference": "https://github.com/Fannovel16/ComfyUI-Video-Matting", + "files": [ + "https://github.com/Fannovel16/ComfyUI-Video-Matting" + ], + "install_type": "git-clone", + "description": "A minimalistic implementation of [a/Robust Video Matting (RVM)](https://github.com/PeterL1n/RobustVideoMatting/) in ComfyUI" + }, + { + "author": "biegert", + "title": "CLIPSeg", + "reference": "https://github.com/biegert/ComfyUI-CLIPSeg", + "files": [ + "https://github.com/biegert/ComfyUI-CLIPSeg/raw/main/custom_nodes/clipseg.py" + ], + "install_type": "copy", + "description": "The CLIPSeg node generates a binary mask for a given input image and text prompt." 
+ }, + { + "author": "BlenderNeko", + "title": "ComfyUI Cutoff", + "reference": "https://github.com/BlenderNeko/ComfyUI_Cutoff", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_Cutoff" + ], + "install_type": "git-clone", + "description": "These custom nodes provide features that allow for better control over the effects of the text prompt." + }, + { + "author": "BlenderNeko", + "title": "Advanced CLIP Text Encode", + "reference": "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb" + ], + "install_type": "git-clone", + "description": "Advanced CLIP Text Encode (if you need A1111-like prompts, you need this; note that the Cutoff node already includes this feature)." + }, + { + "author": "BlenderNeko", + "title": "ComfyUI Noise", + "reference": "https://github.com/BlenderNeko/ComfyUI_Noise", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_Noise" + ], + "install_type": "git-clone", + "description": "This extension contains 6 nodes for ComfyUI that allow for more control and flexibility over the noise." + }, + { + "author": "BlenderNeko", + "title": "Tiled sampling for ComfyUI", + "reference": "https://github.com/BlenderNeko/ComfyUI_TiledKSampler", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_TiledKSampler" + ], + "install_type": "git-clone", + "description": "This extension contains a tiled sampler for ComfyUI. It allows for denoising larger images by splitting them up into smaller tiles and denoising these. It tries to minimize any seams from showing up in the end result by gradually denoising all tiles one step at a time and randomizing tile positions for every step." + }, + { + "author": "BlenderNeko", + "title": "SeeCoder [WIP]", + "reference": "https://github.com/BlenderNeko/ComfyUI_SeeCoder", + "files": [ + "https://github.com/BlenderNeko/ComfyUI_SeeCoder" + ], + "install_type": "git-clone", + "description": "It provides the capability to generate CLIP from an image input; unlike unCLIP, it works in all models. (To use this extension, you need to download the required model file from **Install Models**)" + }, + { + "author": "jags111", + "title": "Efficiency Nodes for ComfyUI Version 2.0+", + "reference": "https://github.com/jags111/efficiency-nodes-comfyui", + "files": [ + "https://github.com/jags111/efficiency-nodes-comfyui" + ], + "install_type": "git-clone", + "description": "A collection of ComfyUI custom nodes to help streamline workflows and reduce total node count.[w/NOTE: This node is originally created by LucianoCirino, but the [a/original repository](https://github.com/LucianoCirino/efficiency-nodes-comfyui) is no longer maintained and has been forked by a new maintainer. 
To use the forked version, you should uninstall the original version and **REINSTALL** this one.]" + }, + { + "author": "jags111", + "title": "ComfyUI_Jags_VectorMagic", + "reference": "https://github.com/jags111/ComfyUI_Jags_VectorMagic", + "files": [ + "https://github.com/jags111/ComfyUI_Jags_VectorMagic" + ], + "install_type": "git-clone", + "description": "A collection of nodes to explore vector and image manipulation." + }, + { + "author": "jags111", + "title": "ComfyUI_Jags_Audiotools", + "reference": "https://github.com/jags111/ComfyUI_Jags_Audiotools", + "files": [ + "https://github.com/jags111/ComfyUI_Jags_Audiotools" + ], + "install_type": "git-clone", + "description": "This extension offers various audio generation tools." + }, + { + "author": "Derfuu", + "title": "Derfuu_ComfyUI_ModdedNodes", + "reference": "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes", + "files": [ + "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes" + ], + "install_type": "git-clone", + "description": "Automate calculations based on image sizes, or anything else you want." + }, + { + "author": "paulo-coronado", + "title": "comfy_clip_blip_node", + "reference": "https://github.com/paulo-coronado/comfy_clip_blip_node", + "files": [ + "https://github.com/paulo-coronado/comfy_clip_blip_node" + ], + "install_type": "git-clone", + "apt_dependency": [ + "rustc", + "cargo" + ], + "description": "CLIPTextEncodeBLIP: This custom node provides a CLIP Encoder that is capable of receiving images as input." + }, + { + "author": "Davemane42", + "title": "Visual Area Conditioning / Latent composition", + "reference": "https://github.com/Davemane42/ComfyUI_Dave_CustomNode", + "files": [ + "https://github.com/Davemane42/ComfyUI_Dave_CustomNode" + ], + "install_type": "git-clone", + "description": "This tool provides custom nodes that allow visualization and configuration of area conditioning and latent composite." + }, + { + "author": "WASasquatch", + "title": "WAS Node Suite", + "reference": "https://github.com/WASasquatch/was-node-suite-comfyui", + "pip": ["numba"], + "files": [ + "https://github.com/WASasquatch/was-node-suite-comfyui" + ], + "install_type": "git-clone", + "description": "A node suite for ComfyUI with many new nodes, such as image processing, text processing, and more." + }, + { + "author": "WASasquatch", + "title": "ComfyUI Preset Merger", + "reference": "https://github.com/WASasquatch/ComfyUI_Preset_Merger", + "files": [ + "https://github.com/WASasquatch/ComfyUI_Preset_Merger" + ], + "install_type": "git-clone", + "description": "Nodes: ModelMergeByPreset. Merge checkpoint models by preset." + }, + { + "author": "WASasquatch", + "title": "PPF_Noise_ComfyUI", + "reference": "https://github.com/WASasquatch/PPF_Noise_ComfyUI", + "files": [ + "https://github.com/WASasquatch/PPF_Noise_ComfyUI" + ], + "install_type": "git-clone", + "description": "Nodes: WAS_PFN_Latent. Perlin Power Fractal Noisey Latents" + }, + { + "author": "WASasquatch", + "title": "Power Noise Suite for ComfyUI", + "reference": "https://github.com/WASasquatch/PowerNoiseSuite", + "files": [ + "https://github.com/WASasquatch/PowerNoiseSuite" + ], + "install_type": "git-clone", + "description": "Power Noise Suite contains nodes centered around latent noise input and diffusion, as well as latent adjustments." 
+ }, + { + "author": "WASasquatch", + "title": "FreeU_Advanced", + "reference": "https://github.com/WASasquatch/FreeU_Advanced", + "files": [ + "https://github.com/WASasquatch/FreeU_Advanced" + ], + "install_type": "git-clone", + "description": "This custom node provides advanced settings for FreeU." + }, + { + "author": "WASasquatch", + "title": "ASTERR", + "reference": "https://github.com/WASasquatch/ASTERR", + "files": [ + "https://github.com/WASasquatch/ASTERR" + ], + "install_type": "git-clone", + "description": "Abstract Syntax Trees Evaluated Restricted Run (ASTERR) is a Python script executor for ComfyUI. [w/Warning: ASTERR runs Python code from a web interface! It is highly recommended to run this in a closed-off environment, as it could have potential security risks.]" + }, + { + "author": "WASasquatch", + "title": "WAS_Extras", + "reference": "https://github.com/WASasquatch/WAS_Extras", + "files": [ + "https://github.com/WASasquatch/WAS_Extras" + ], + "install_type": "git-clone", + "description": "Nodes: Conditioning (Blend), Inpainting VAE Encode (WAS), VividSharpen. Experimental nodes, or other random extra helper nodes." + }, + { + "author": "omar92", + "title": "Quality of life Suit:V2", + "reference": "https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92", + "files": [ + "https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92" + ], + "install_type": "git-clone", + "description": "OpenAI suite, String suite, Latent Tools, Image Tools: These custom nodes provide expanded functionality for image and string processing, latent processing, as well as the ability to interface with models such as ChatGPT/DallE-2.\nNOTE: Currently, this extension does not support the new OpenAI API, leading to compatibility issues." + }, + { + "author": "lilly1987", + "title": "simple wildcard for ComfyUI", + "reference": "https://github.com/lilly1987/ComfyUI_node_Lilly", + "files": [ + "https://github.com/lilly1987/ComfyUI_node_Lilly" + ], + "install_type": "git-clone", + "description": "These custom nodes provide a feature to insert arbitrary inputs through wildcards in the prompt. Additionally, this tool provides features that help simplify workflows, such as VAELoaderDecoder and SimplerSample." + }, + { + "author": "sylym", + "title": "Vid2vid", + "reference": "https://github.com/sylym/comfy_vid2vid", + "files": [ + "https://github.com/sylym/comfy_vid2vid" + ], + "install_type": "git-clone", + "description": "A node suite for ComfyUI that allows you to load an image sequence and generate a new image sequence with different styles or content." + }, + { + "author": "EllangoK", + "title": "ComfyUI-post-processing-nodes", + "reference": "https://github.com/EllangoK/ComfyUI-post-processing-nodes", + "files": [ + "https://github.com/EllangoK/ComfyUI-post-processing-nodes" + ], + "install_type": "git-clone", + "description": "A collection of post-processing nodes for ComfyUI; simply download this repo and drag it in." + }, + { + "author": "LEv145", + "title": "ImagesGrid", + "reference": "https://github.com/LEv145/images-grid-comfy-plugin", + "files": [ + "https://github.com/LEv145/images-grid-comfy-plugin" + ], + "install_type": "git-clone", + "description": "This tool provides a viewer node that allows for checking multiple outputs in a grid, similar to the X/Y Plot extension." 
+ }, + { + "author": "diontimmer", + "title": "ComfyUI-Vextra-Nodes", + "reference": "https://github.com/diontimmer/ComfyUI-Vextra-Nodes", + "files": [ + "https://github.com/diontimmer/ComfyUI-Vextra-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes: Pixel Sort, Swap Color Mode, Solid Color, Glitch This, Add Text To Image, Play Sound, Prettify Prompt, Generate Noise, Flatten Colors" + }, + { + "author": "CYBERLOOM-INC", + "title": "ComfyUI-nodes-hnmr", + "reference": "https://github.com/CYBERLOOM-INC/ComfyUI-nodes-hnmr", + "files": [ + "https://github.com/CYBERLOOM-INC/ComfyUI-nodes-hnmr" + ], + "install_type": "git-clone", + "description": "Provides various custom nodes for Latent, Sampling, Model, Loader, Image, and Text. This is the fixed version of the original [a/ComfyUI-nodes-hnmr](https://github.com/hnmr293/ComfyUI-nodes-hnmr) by hnmr293." + }, + { + "author": "BadCafeCode", + "title": "Masquerade Nodes", + "reference": "https://github.com/BadCafeCode/masquerade-nodes-comfyui", + "files": [ + "https://github.com/BadCafeCode/masquerade-nodes-comfyui" + ], + "install_type": "git-clone", + "description": "This is a node pack for ComfyUI, primarily dealing with masks." + }, + { + "author": "guoyk93", + "title": "y.k.'s ComfyUI node suite", + "reference": "https://github.com/guoyk93/yk-node-suite-comfyui", + "files": [ + "https://github.com/guoyk93/yk-node-suite-comfyui" + ], + "install_type": "git-clone", + "description": "Nodes: YKImagePadForOutpaint, YKMaskToImage" + }, + { + "author": "Jcd1230", + "title": "Rembg Background Removal Node for ComfyUI", + "reference": "https://github.com/Jcd1230/rembg-comfyui-node", + "files": [ + "https://github.com/Jcd1230/rembg-comfyui-node" + ], + "install_type": "git-clone", + "description": "Nodes: Image Remove Background (rembg)" + }, + { + "author": "YinBailiang", + "title": "MergeBlockWeighted_fo_ComfyUI", + "reference": "https://github.com/YinBailiang/MergeBlockWeighted_fo_ComfyUI", + "files": [ + "https://github.com/YinBailiang/MergeBlockWeighted_fo_ComfyUI" + ], + "install_type": "git-clone", + "description": "Nodes: MergeBlockWeighted" + }, + { + "author": "trojblue", + "title": "trNodes", + "reference": "https://github.com/trojblue/trNodes", + "files": [ + "https://github.com/trojblue/trNodes" + ], + "install_type": "git-clone", + "description": "Nodes: image_layering, color_correction, model_router" + }, + { + "author": "szhublox", + "title": "Auto-MBW", + "reference": "https://github.com/szhublox/ambw_comfyui", + "files": [ + "https://github.com/szhublox/ambw_comfyui" + ], + "install_type": "git-clone", + "description": "Auto-MBW for ComfyUI, loosely based on sdweb-auto-MBW. Nodes: auto merge block weighted" + }, + { + "author": "city96", + "title": "ComfyUI_NetDist", + "reference": "https://github.com/city96/ComfyUI_NetDist", + "files": [ + "https://github.com/city96/ComfyUI_NetDist" + ], + "install_type": "git-clone", + "description": "Run ComfyUI workflows on multiple local GPUs/networked machines. Nodes: Remote images, Local Remote control" + }, + { + "author": "city96", + "title": "Latent-Interposer", + "reference": "https://github.com/city96/SD-Latent-Interposer", + "files": [ + "https://github.com/city96/SD-Latent-Interposer" + ], + "install_type": "git-clone", + "description": "Custom node to convert the latents between SDXL and SD v1.5 directly, without the VAE decoding/encoding step." 
+ }, + { + "author": "city96", + "title": "SD-Advanced-Noise", + "reference": "https://github.com/city96/SD-Advanced-Noise", + "files": [ + "https://github.com/city96/SD-Advanced-Noise" + ], + "install_type": "git-clone", + "description": "Nodes: LatentGaussianNoise, MathEncode. An experimental custom node that generates latent noise directly by utilizing the linear characteristics of the latent space." + }, + { + "author": "city96", + "title": "SD-Latent-Upscaler", + "reference": "https://github.com/city96/SD-Latent-Upscaler", + "files": [ + "https://github.com/city96/SD-Latent-Upscaler" + ], + "pip": ["huggingface-hub"], + "install_type": "git-clone", + "description": "Upscaling stable diffusion latents using a small neural network." + }, + { + "author": "city96", + "title": "ComfyUI_DiT [WIP]", + "reference": "https://github.com/city96/ComfyUI_DiT", + "files": [ + "https://github.com/city96/ComfyUI_DiT" + ], + "pip": ["huggingface-hub"], + "install_type": "git-clone", + "description": "Testbed for [a/DiT(Scalable Diffusion Models with Transformers)](https://github.com/facebookresearch/DiT). [w/None of this code is stable, expect breaking changes if for some reason you want to use this.]" + }, + { + "author": "city96", + "title": "ComfyUI_ColorMod", + "reference": "https://github.com/city96/ComfyUI_ColorMod", + "files": [ + "https://github.com/city96/ComfyUI_ColorMod" + ], + "install_type": "git-clone", + "description": "This extension currently has two sets of nodes - one set for editing the contrast/color of images and another set for saving images as 16 bit PNG files." + }, + { + "author": "city96", + "title": "Extra Models for ComfyUI", + "reference": "https://github.com/city96/ComfyUI_ExtraModels", + "files": [ + "https://github.com/city96/ComfyUI_ExtraModels" + ], + "install_type": "git-clone", + "description": "This extension aims to add support for various random image diffusion models to ComfyUI." + }, + { + "author": "Kaharos94", + "title": "ComfyUI-Saveaswebp", + "reference": "https://github.com/Kaharos94/ComfyUI-Saveaswebp", + "files": [ + "https://github.com/Kaharos94/ComfyUI-Saveaswebp" + ], + "install_type": "git-clone", + "description": "Save a picture as a WebP file in ComfyUI, with workflow loading support." + }, + { + "author": "SLAPaper", + "title": "ComfyUI-Image-Selector", + "reference": "https://github.com/SLAPaper/ComfyUI-Image-Selector", + "files": [ + "https://github.com/SLAPaper/ComfyUI-Image-Selector" + ], + "install_type": "git-clone", + "description": "A custom node for ComfyUI which can select one or more images from a batch." 
+ }, + { + "author": "flyingshutter", + "title": "As_ComfyUI_CustomNodes", + "reference": "https://github.com/flyingshutter/As_ComfyUI_CustomNodes", + "files": [ + "https://github.com/flyingshutter/As_ComfyUI_CustomNodes" + ], + "install_type": "git-clone", + "description": "Manipulation nodes for Image, Latent" + }, + { + "author": "Zuellni", + "title": "Zuellni/ComfyUI-Custom-Nodes", + "reference": "https://github.com/Zuellni/ComfyUI-Custom-Nodes", + "files": [ + "https://github.com/Zuellni/ComfyUI-Custom-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes: DeepFloyd, Filter, Select, Save, Decode, Encode, Repeat, Noise" + }, + { + "author": "Zuellni", + "title": "ComfyUI-ExLlama", + "reference": "https://github.com/Zuellni/ComfyUI-ExLlama", + "files": [ + "https://github.com/Zuellni/ComfyUI-ExLlama" + ], + "pip": ["sentencepiece", "https://github.com/jllllll/exllama/releases/download/0.0.17/exllama-0.0.17+cu118-cp310-cp310-win_amd64.whl"], + "install_type": "git-clone", + "description": "Nodes: ExLlama Loader, ExLlama Generator.\nUsed to load 4-bit GPTQ Llama/2 models. You can find a lot of them over at [a/https://huggingface.co/TheBloke](https://huggingface.co/TheBloke)[w/NOTE: You need to manually install a pip package that suits your system. For example, if your system is 'Python3.10 + Windows + CUDA 11.8', then you need to install 'exllama-0.0.17+cu118-cp310-cp310-win_amd64.whl'. Available package files are [a/here](https://github.com/jllllll/exllama/releases)]" + }, + { + "author": "Zuellni", + "title": "ComfyUI PickScore Nodes", + "reference": "https://github.com/Zuellni/ComfyUI-PickScore-Nodes", + "files": [ + "https://github.com/Zuellni/ComfyUI-PickScore-Nodes" + ], + "install_type": "git-clone", + "description": "Image scoring nodes for ComfyUI using PickScore with a batch of images to predict which ones fit a given prompt the best." + }, + { + "author": "AlekPet", + "title": "AlekPet/ComfyUI_Custom_Nodes_AlekPet", + "reference": "https://github.com/AlekPet/ComfyUI_Custom_Nodes_AlekPet", + "files": [ + "https://github.com/AlekPet/ComfyUI_Custom_Nodes_AlekPet" + ], + "install_type": "git-clone", + "description": "Nodes: PoseNode, PainterNode, TranslateTextNode, TranslateCLIPTextEncodeNode, DeepTranslatorTextNode, DeepTranslatorCLIPTextEncodeNode, ArgosTranslateTextNode, ArgosTranslateCLIPTextEncodeNode, PreviewTextNode.\n\nNOTE: Due to the dynamic nature of node name definitions, ComfyUI-Manager cannot recognize the node list from this extension. The Missing nodes and Badge features are not available for this extension." + }, + { + "author": "pythongosssss", + "title": "ComfyUI WD 1.4 Tagger", + "reference": "https://github.com/pythongosssss/ComfyUI-WD14-Tagger", + "files": [ + "https://github.com/pythongosssss/ComfyUI-WD14-Tagger" + ], + "install_type": "git-clone", + "description": "A ComfyUI extension allowing the interrogation of booru tags from images." 
+ }, + { + "author": "pythongosssss", + "title": "pythongosssss/ComfyUI-Custom-Scripts", + "reference": "https://github.com/pythongosssss/ComfyUI-Custom-Scripts", + "files": [ + "https://github.com/pythongosssss/ComfyUI-Custom-Scripts" + ], + "install_type": "git-clone", + "description": "This extension provides: Auto Arrange Graph, Workflow SVG, Favicon Status, Image Feed, Latent Upscale By, Lock Nodes & Groups, Lora Subfolders, Preset Text, Show Text, Touch Support, Link Render Mode, Locking, Node Finder, Quick Nodes, Show Image On Menu, Workflow Management, Custom Widget Default Values" + }, + { + "author": "strimmlarn", + "title": "ComfyUI_Strimmlarns_aesthetic_score", + "reference": "https://github.com/strimmlarn/ComfyUI_Strimmlarns_aesthetic_score", + "js_path": "strimmlarn", + "files": [ + "https://github.com/strimmlarn/ComfyUI_Strimmlarns_aesthetic_score" + ], + "install_type": "git-clone", + "description": "Nodes: CalculateAestheticScore, LoadAesteticModel, AesthetlcScoreSorter, ScoreToNumber" + }, + { + "author": "tinyterra", + "title": "tinyterraNodes", + "reference": "https://github.com/tinyterra/ComfyUI_tinyterraNodes", + "files": [ + "https://github.com/TinyTerra/ComfyUI_tinyterraNodes" + ], + "install_type": "git-clone", + "nodename_pattern": "^ttN ", + "description": "This extension offers various pipe nodes, a fullscreen image viewer based on node history, dynamic widgets, interface customization, and more." + }, + { + "author": "Jordach", + "title": "comfy-plasma", + "reference": "https://github.com/Jordach/comfy-plasma", + "files": [ + "https://github.com/Jordach/comfy-plasma" + ], + "install_type": "git-clone", + "description": "Nodes: Plasma Noise, Random Noise, Greyscale Noise, Pink Noise, Brown Noise, Plasma KSampler" + }, + { + "author": "bvhari", + "title": "ImageProcessing", + "reference": "https://github.com/bvhari/ComfyUI_ImageProcessing", + "files": [ + "https://github.com/bvhari/ComfyUI_ImageProcessing" + ], + "install_type": "git-clone", + "description": "ComfyUI custom nodes to apply various image processing techniques." + }, + { + "author": "bvhari", + "title": "LatentToRGB", + "reference": "https://github.com/bvhari/ComfyUI_LatentToRGB", + "files": [ + "https://github.com/bvhari/ComfyUI_LatentToRGB" + ], + "install_type": "git-clone", + "description": "ComfyUI custom node to convert latent to RGB." + }, + { + "author": "bvhari", + "title": "ComfyUI_PerpWeight", + "reference": "https://github.com/bvhari/ComfyUI_PerpWeight", + "files": [ + "https://github.com/bvhari/ComfyUI_PerpWeight" + ], + "install_type": "git-clone", + "description": "A novel weighting scheme for token vectors from CLIP. Allows a wider range of values for the weight. Inspired by Perp-Neg." + }, + { + "author": "ssitu", + "title": "UltimateSDUpscale", + "reference": "https://github.com/ssitu/ComfyUI_UltimateSDUpscale", + "files": [ + "https://github.com/ssitu/ComfyUI_UltimateSDUpscale" + ], + "install_type": "git-clone", + "description": "ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A." + }, + { + "author": "ssitu", + "title": "NestedNodeBuilder", + "reference": "https://github.com/ssitu/ComfyUI_NestedNodeBuilder", + "files": [ + "https://github.com/ssitu/ComfyUI_NestedNodeBuilder" + ], + "install_type": "git-clone", + "description": "This extension provides the ability to combine multiple nodes into a single node." 
+ }, + { + "author": "ssitu", + "title": "Restart Sampling", + "reference": "https://github.com/ssitu/ComfyUI_restart_sampling", + "files": [ + "https://github.com/ssitu/ComfyUI_restart_sampling" + ], + "install_type": "git-clone", + "description": "Unofficial ComfyUI nodes for restart sampling based on the paper 'Restart Sampling for Improving Generative Processes' ([a/paper](https://arxiv.org/abs/2306.14878), [a/repo](https://github.com/Newbeeer/diffusion_restart_sampling))" + }, + { + "author": "ssitu", + "title": "ComfyUI roop", + "reference": "https://github.com/ssitu/ComfyUI_roop", + "files": [ + "https://github.com/ssitu/ComfyUI_roop" + ], + "install_type": "git-clone", + "description": "ComfyUI nodes for the roop A1111 webui script." + }, + { + "author": "ssitu", + "title": "ComfyUI fabric", + "reference": "https://github.com/ssitu/ComfyUI_fabric", + "files": [ + "https://github.com/ssitu/ComfyUI_fabric" + ], + "install_type": "git-clone", + "description": "ComfyUI nodes based on the paper [a/FABRIC: Personalizing Diffusion Models with Iterative Feedback](https://arxiv.org/abs/2307.10159) (Feedback via Attention-Based Reference Image Conditioning)" + }, + { + "author": "space-nuko", + "title": "Disco Diffusion", + "reference": "https://github.com/space-nuko/ComfyUI-Disco-Diffusion", + "files": [ + "https://github.com/space-nuko/ComfyUI-Disco-Diffusion" + ], + "install_type": "git-clone", + "description": "Modularized version of Disco Diffusion for use with ComfyUI." + }, + { + "author": "space-nuko", + "title": "OpenPose Editor", + "reference": "https://github.com/space-nuko/ComfyUI-OpenPose-Editor", + "files": [ + "https://github.com/space-nuko/ComfyUI-OpenPose-Editor" + ], + "install_type": "git-clone", + "description": "A port of the openpose-editor extension for stable-diffusion-webui. NOTE: Requires [a/this ComfyUI patch](https://github.com/comfyanonymous/ComfyUI/pull/711) to work correctly" + }, + { + "author": "space-nuko", + "title": "nui suite", + "reference": "https://github.com/space-nuko/nui-suite", + "files": [ + "https://github.com/space-nuko/nui-suite" + ], + "install_type": "git-clone", + "description": "NODES: Dynamic Prompts Text Encode, Feeling Lucky Text Encode, Output String" + }, + { + "author": "Nourepide", + "title": "Allor Plugin", + "reference": "https://github.com/Nourepide/ComfyUI-Allor", + "files": [ + "https://github.com/Nourepide/ComfyUI-Allor" + ], + "install_type": "git-clone", + "description": "Allor is a plugin for ComfyUI with an emphasis on transparency and performance.\n[w/NOTE: If you do not disable the default node override feature in the settings, the built-in ImageScale and ImageScaleBy nodes will be disabled. (ref: [a/Configuration](https://github.com/Nourepide/ComfyUI-Allor#configuration))]" + }, + { + "author": "melMass", + "title": "MTB Nodes", + "reference": "https://github.com/melMass/comfy_mtb", + "files": [ + "https://github.com/melMass/comfy_mtb" + ], + "nodename_pattern": "\\(mtb\\)$", + "install_type": "git-clone", + "description": "NODES: Face Swap, Film Interpolation, Latent Lerp, Int To Number, Bounding Box, Crop, Uncrop, ImageBlur, Denoise, ImageCompare, RGB to HSV, HSV to RGB, Color Correct, Modulo, Deglaze Image, Smart Step, ..." + }, + { + "author": "xXAdonesXx", + "title": "NodeGPT", + "reference": "https://github.com/xXAdonesXx/NodeGPT", + "files": [ + "https://github.com/xXAdonesXx/NodeGPT" + ], + "install_type": "git-clone", + "description": "Implementation of AutoGen inside ComfyUI. 
This repository is under development, and not everything is functioning correctly yet." + }, + { + "author": "Suzie1", + "title": "ComfyUI_Comfyroll_CustomNodes", + "reference": "https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes", + "files": [ + "https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes" + ], + "install_type": "git-clone", + "description": "Custom nodes for SDXL and SD1.5 including Multi-ControlNet, LoRA, Aspect Ratio, Process Switches, and many more nodes. NOTE: Maintainer is changed to Suzie1 from RockOfFire. [w/Using an outdated version has resulted in reported issues with updates not being applied; reinstalling the extension is advised.]" + }, + { + "author": "bmad4ever", + "title": "ComfyUI-Bmad-DirtyUndoRedo", + "reference": "https://github.com/bmad4ever/ComfyUI-Bmad-DirtyUndoRedo", + "files": [ + "https://github.com/bmad4ever/ComfyUI-Bmad-DirtyUndoRedo" + ], + "install_type": "git-clone", + "description": "ComfyUI extension that adds undo (and redo) functionality." + }, + { + "author": "bmad4ever", + "title": "Bmad Nodes", + "reference": "https://github.com/bmad4ever/comfyui_bmad_nodes", + "files": [ + "https://github.com/bmad4ever/comfyui_bmad_nodes" + ], + "install_type": "git-clone", + "description": "This custom node offers the following functionalities: API support for setting up API requests, computer vision primarily for masking or collages, and general utility to streamline workflow setup or implement essential missing features." + }, + { + "author": "bmad4ever", + "title": "comfyui_ab_sampler", + "reference": "https://github.com/bmad4ever/comfyui_ab_samplercustom", + "files": [ + "https://github.com/bmad4ever/comfyui_ab_samplercustom" + ], + "install_type": "git-clone", + "description": "Experimental sampler node. Sampling alternates between A and B inputs until only one remains, starting with A. B steps run over a 2x2 grid, where 3/4 of the grid are copies of the original input latent. When the optional mask is used, the region outside the defined ROI is copied from the original latent at the end of every step." + }, + { + "author": "bmad4ever", + "title": "Lists Cartesian Product", + "reference": "https://github.com/bmad4ever/comfyui_lists_cartesian_product", + "files": [ + "https://github.com/bmad4ever/comfyui_lists_cartesian_product" + ], + "install_type": "git-clone", + "description": "Given a set of lists, the node adjusts them so that when used as input to another node all the possible argument permutations are computed." + }, + { + "author": "FizzleDorf", + "title": "FizzNodes", + "reference": "https://github.com/FizzleDorf/ComfyUI_FizzNodes", + "files": [ + "https://github.com/FizzleDorf/ComfyUI_FizzNodes" + ], + "install_type": "git-clone", + "description": "Scheduled prompts, scheduled float/int values and wave function nodes for animations and utility. Compatible with [a/framesync](https://www.framesync.xyz/) and [a/keyframe-string-generator](https://www.chigozie.co.uk/keyframe-string-generator/) for audio-synced animations in ComfyUI." + }, + { + "author": "FizzleDorf", + "title": "ComfyUI-AIT", + "reference": "https://github.com/FizzleDorf/ComfyUI-AIT", + "files": [ + "https://github.com/FizzleDorf/ComfyUI-AIT" + ], + "install_type": "git-clone", + "description": "A ComfyUI implementation of Meta's [a/AITemplate](https://github.com/facebookincubator/AITemplate) repo for faster inference using cpp/cuda. This new repo is behind the old version but is a much more stable foundation to keep AIT online. 
Please be patient as the repo will eventually include the same features as before.\nNOTE: You can find the old AIT extension in the legacy channel." + }, + { + "author": "filipemeneses", + "title": "Pixelization", + "reference": "https://github.com/filipemeneses/comfy_pixelization", + "files": [ + "https://github.com/filipemeneses/comfy_pixelization" + ], + "install_type": "git-clone", + "description": "ComfyUI node that pixelizes images." + }, + { + "author": "shiimizu", + "title": "smZNodes", + "reference": "https://github.com/shiimizu/ComfyUI_smZNodes", + "files": [ + "https://github.com/shiimizu/ComfyUI_smZNodes" + ], + "install_type": "git-clone", + "description": "NODES: CLIP Text Encode++. Achieve identical embeddings from stable-diffusion-webui for ComfyUI." + }, + { + "author": "shiimizu", + "title": "Tiled Diffusion & VAE for ComfyUI", + "reference": "https://github.com/shiimizu/ComfyUI-TiledDiffusion", + "files": [ + "https://github.com/shiimizu/ComfyUI-TiledDiffusion" + ], + "install_type": "git-clone", + "description": "The extension enables large image drawing & upscaling with limited VRAM via the following techniques:\n1. Two SOTA diffusion tiling algorithms: [a/Mixture of Diffusers](https://github.com/albarji/mixture-of-diffusers) and [a/MultiDiffusion](https://github.com/omerbt/MultiDiffusion)\n2. pkuliyi2015's Tiled VAE algorithm." + }, + { + "author": "ZaneA", + "title": "ImageReward", + "reference": "https://github.com/ZaneA/ComfyUI-ImageReward", + "files": [ + "https://github.com/ZaneA/ComfyUI-ImageReward" + ], + "install_type": "git-clone", + "description": "NODES: ImageRewardLoader, ImageRewardScore" + }, + { + "author": "SeargeDP", + "title": "SeargeSDXL", + "reference": "https://github.com/SeargeDP/SeargeSDXL", + "files": [ + "https://github.com/SeargeDP/SeargeSDXL" + ], + "install_type": "git-clone", + "description": "Custom nodes for easier use of SDXL in ComfyUI, including an img2img workflow that utilizes both the base and refiner checkpoints." + }, + { + "author": "cubiq", + "title": "Simple Math", + "reference": "https://github.com/cubiq/ComfyUI_SimpleMath", + "files": [ + "https://github.com/cubiq/ComfyUI_SimpleMath" + ], + "install_type": "git-clone", + "description": "Custom node for ComfyUI to perform simple math operations." + }, + { + "author": "cubiq", + "title": "ComfyUI_IPAdapter_plus", + "reference": "https://github.com/cubiq/ComfyUI_IPAdapter_plus", + "files": [ + "https://github.com/cubiq/ComfyUI_IPAdapter_plus" + ], + "pip": ["insightface"], + "install_type": "git-clone", + "description": "ComfyUI reference implementation for IPAdapter models. The code is mostly taken from the original IPAdapter repository and laksjdjf's implementation; all credit goes to them. I just made the extension closer to ComfyUI philosophy." + }, + { + "author": "cubiq", + "title": "ComfyUI InstantID (Native Support)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID", + "files": [ + "https://github.com/cubiq/ComfyUI_InstantID" + ], + "install_type": "git-clone", + "description": "Native [a/InstantID](https://github.com/InstantID/InstantID) support for ComfyUI.\nThis extension differs from the many already available as it doesn't use diffusers but instead implements InstantID natively, and it fully integrates with ComfyUI.\nPlease note this could still be considered beta stage; looking forward to your feedback." 
+ }, + { + "author": "shockz0rz", + "title": "InterpolateEverything", + "reference": "https://github.com/shockz0rz/ComfyUI_InterpolateEverything", + "files": [ + "https://github.com/shockz0rz/ComfyUI_InterpolateEverything" + ], + "install_type": "git-clone", + "description": "Nodes: Interpolate Poses, Interpolate Lineart, ... Custom nodes for interpolating between, well, everything in the Stable Diffusion ComfyUI." + }, + { + "author": "shockz0rz", + "title": "comfy-easy-grids", + "reference": "https://github.com/shockz0rz/comfy-easy-grids", + "files": [ + "https://github.com/shockz0rz/comfy-easy-grids" + ], + "install_type": "git-clone", + "description": "A set of custom nodes for creating image grids, sequences, and batches in ComfyUI." + }, + { + "author": "yolanother", + "title": "Comfy UI Prompt Agent", + "reference": "https://github.com/yolanother/DTAIComfyPromptAgent", + "files": [ + "https://github.com/yolanother/DTAIComfyPromptAgent" + ], + "install_type": "git-clone", + "description": "Nodes: Prompt Agent, Prompt Agent (String). This script provides a prompt agent node for the Comfy UI stable diffusion client." + }, + { + "author": "yolanother", + "title": "Image to Text Node", + "reference": "https://github.com/yolanother/DTAIImageToTextNode", + "files": [ + "https://github.com/yolanother/DTAIImageToTextNode" + ], + "install_type": "git-clone", + "description": "Nodes: Image URL to Text, Image to Text." + }, + { + "author": "yolanother", + "title": "Comfy UI Online Loaders", + "reference": "https://github.com/yolanother/DTAIComfyLoaders", + "files": [ + "https://github.com/yolanother/DTAIComfyLoaders" + ], + "install_type": "git-clone", + "description": "Nodes: Submit Image (Parameters), Submit Image. A collection of loaders that use a shared common online data source rather than relying on the files to be present locally." + }, + { + "author": "yolanother", + "title": "Comfy AI DoubTech.ai Image Submission Node", + "reference": "https://github.com/yolanother/DTAIComfyImageSubmit", + "files": [ + "https://github.com/yolanother/DTAIComfyImageSubmit" + ], + "install_type": "git-clone", + "description": "A ComfyAI submit node to upload images to DoubTech.ai" + }, + { + "author": "yolanother", + "title": "Comfy UI QR Codes", + "reference": "https://github.com/yolanother/DTAIComfyQRCodes", + "files": [ + "https://github.com/yolanother/DTAIComfyQRCodes" + ], + "install_type": "git-clone", + "description": "This extension introduces QR code nodes for the Comfy UI stable diffusion client. NOTE: ComfyUI qrcode extension required." + }, + { + "author": "yolanother", + "title": "Variables for Comfy UI", + "reference": "https://github.com/yolanother/DTAIComfyVariables", + "files": [ + "https://github.com/yolanother/DTAIComfyVariables" + ], + "install_type": "git-clone", + "description": "Nodes: String, Int, Float, Short String, CLIP Text Encode (With Variables), String Format, Short String Format. This extension introduces quality of life improvements by providing variable nodes and shared global variables." 
+ }, + { + "author": "sipherxyz", + "title": "comfyui-art-venture", + "reference": "https://github.com/sipherxyz/comfyui-art-venture", + "files": [ + "https://github.com/sipherxyz/comfyui-art-venture" + ], + "install_type": "git-clone", + "description": "Nodes: ImagesConcat, LoadImageFromUrl, AV_UploadImage" + }, + { + "author": "SOELexicon", + "title": "LexMSDBNodes", + "reference": "https://github.com/SOELexicon/ComfyUI-LexMSDBNodes", + "files": [ + "https://github.com/SOELexicon/ComfyUI-LexMSDBNodes" + ], + "install_type": "git-clone", + "description": "Nodes: MSSqlTableNode, MSSqlSelectNode. This extension provides custom nodes to interact with MSSQL." + }, + { + "author": "pants007", + "title": "pants", + "reference": "https://github.com/pants007/comfy-pants", + "files": [ + "https://github.com/pants007/comfy-pants" + ], + "install_type": "git-clone", + "description": "Nodes: Make Square Node, Interrogate Node, TextEncodeAIO" + }, + { + "author": "evanspearman", + "title": "ComfyMath", + "reference": "https://github.com/evanspearman/ComfyMath", + "files": [ + "https://github.com/evanspearman/ComfyMath" + ], + "install_type": "git-clone", + "description": "Provides Math Nodes for ComfyUI. Boolean Logic, Integer Arithmetic, Floating Point Arithmetic and Functions, Vec2, Vec3, and Vec4 Arithmetic and Functions" + }, + { + "author": "civitai", + "title": "comfy-nodes", + "reference": "https://github.com/civitai/comfy-nodes", + "files": [ + "https://github.com/civitai/comfy-nodes" + ], + "install_type": "git-clone", + "description": "Nodes: CivitAI_Loaders. Load checkpoints and LoRA models directly from the CivitAI API." + }, + { + "author": "andersxa", + "title": "CLIP Directional Prompt Attention", + "reference": "https://github.com/andersxa/comfyui-PromptAttention", + "files": [ + "https://github.com/andersxa/comfyui-PromptAttention" + ], + "pip": ["scikit-learn", "matplotlib"], + "install_type": "git-clone", + "description": "Nodes: CLIP Directional Prompt Attention Encode. Directional prompt attention tries to solve the problem of contextual words (or parts of the prompt) having an effect on much later or irrelevant parts of the prompt." + }, + { + "author": "ArtVentureX", + "title": "AnimateDiff", + "reference": "https://github.com/ArtVentureX/comfyui-animatediff", + "pip": ["flash_attn"], + "files": [ + "https://github.com/ArtVentureX/comfyui-animatediff" + ], + "install_type": "git-clone", + "description": "AnimateDiff integration for ComfyUI, adapts from sd-webui-animatediff.\n[w/You only need to download one of [a/mm_sd_v14.ckpt](https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v14.ckpt) | [a/mm_sd_v15.ckpt](https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15.ckpt). Put the model weights under %%ComfyUI/custom_nodes/comfyui-animatediff/models%%. DO NOT change the model filename.]" + }, + { + "author": "twri", + "title": "SDXL Prompt Styler", + "reference": "https://github.com/twri/sdxl_prompt_styler", + "files": [ + "https://github.com/twri/sdxl_prompt_styler" + ], + "install_type": "git-clone", + "description": "SDXL Prompt Styler is a node that enables you to style prompts based on predefined templates stored in a JSON file." 
+ }, + { + "author": "wolfden", + "title": "SDXL Prompt Styler (customized version by wolfden)", + "reference": "https://github.com/wolfden/ComfyUi_PromptStylers", + "files": [ + "https://github.com/wolfden/ComfyUi_PromptStylers" + ], + "install_type": "git-clone", + "description": "These custom nodes provide a variety of customized prompt stylers based on [a/twri/SDXL Prompt Styler](https://github.com/twri/sdxl_prompt_styler)." + }, + { + "author": "wolfden", + "title": "ComfyUi_String_Function_Tree", + "reference": "https://github.com/wolfden/ComfyUi_String_Function_Tree", + "files": [ + "https://github.com/wolfden/ComfyUi_String_Function_Tree" + ], + "install_type": "git-clone", + "description": "This custom node provides the capability to manipulate multiple string inputs." + }, + { + "author": "daxthin", + "title": "DZ-FaceDetailer", + "reference": "https://github.com/daxthin/DZ-FaceDetailer", + "files": [ + "https://github.com/daxthin/DZ-FaceDetailer" + ], + "install_type": "git-clone", + "description": "Face Detailer is a custom node for the 'ComfyUI' framework, inspired by the !After Detailer extension from auto1111. It allows you to detect faces using Mediapipe and YOLOv8n and to create masks for the detected faces." + }, + { + "author": "asagi4", + "title": "ComfyUI prompt control", + "reference": "https://github.com/asagi4/comfyui-prompt-control", + "files": [ + "https://github.com/asagi4/comfyui-prompt-control" + ], + "install_type": "git-clone", + "description": "Nodes for convenient prompt editing. The aim is to make basic generations in ComfyUI completely prompt-controllable." + }, + { + "author": "asagi4", + "title": "ComfyUI-CADS", + "reference": "https://github.com/asagi4/ComfyUI-CADS", + "files": [ + "https://github.com/asagi4/ComfyUI-CADS" + ], + "install_type": "git-clone", + "description": "Attempts to implement [a/CADS](https://arxiv.org/abs/2310.17347) for ComfyUI. Credit also to the [a/A1111 implementation](https://github.com/v0xie/sd-webui-cads/tree/main) that I used as a reference." + }, + { + "author": "asagi4", + "title": "asagi4/comfyui-utility-nodes", + "reference": "https://github.com/asagi4/comfyui-utility-nodes", + "files": [ + "https://github.com/asagi4/comfyui-utility-nodes" + ], + "install_type": "git-clone", + "description": "Nodes: MUJinjaRender, MUSimpleWildcard" + }, + { + "author": "jamesWalker55", + "title": "ComfyUI - P2LDGAN Node", + "reference": "https://github.com/jamesWalker55/comfyui-p2ldgan", + "files": [ + "https://github.com/jamesWalker55/comfyui-p2ldgan" + ], + "install_type": "git-clone", + "description": "Nodes: P2LDGAN. This integrates P2LDGAN into ComfyUI. P2LDGAN extracts lineart from input images.\n[w/To use this extension, you need to download the [a/p2ldgan model](https://drive.google.com/file/d/1To4V_Btc3QhCLBWZ0PdSNgC1cbm3isHP) and save it in the %%ComfyUI/custom_nodes/comfyui-p2ldgan/checkpoints%% directory.]" + }, + { + "author": "jamesWalker55", + "title": "Various ComfyUI Nodes by Type", + "reference": "https://github.com/jamesWalker55/comfyui-various", + "files": [ + "https://github.com/jamesWalker55/comfyui-various" + ], + "nodename_pattern": "^JW", + "install_type": "git-clone", + "description": "Nodes: JWInteger, JWFloat, JWString, JWImageLoadRGB, JWImageResize, ..." 
+ }, + { + "author": "adieyal", + "title": "DynamicPrompts Custom Nodes", + "reference": "https://github.com/adieyal/comfyui-dynamicprompts", + "files": [ + "https://github.com/adieyal/comfyui-dynamicprompts" + ], + "install_type": "git-clone", + "description": "Nodes: Random Prompts, Combinatorial Prompts, I'm Feeling Lucky, Magic Prompt, Jinja2 Templates. ComfyUI-DynamicPrompts is a custom nodes library that integrates into your existing ComfyUI Library. It provides nodes that enable the use of Dynamic Prompts in your ComfyUI." + }, + { + "author": "mihaiiancu", + "title": "mihaiiancu/Inpaint", + "reference": "https://github.com/mihaiiancu/ComfyUI_Inpaint", + "files": [ + "https://github.com/mihaiiancu/ComfyUI_Inpaint" + ], + "install_type": "git-clone", + "description": "Nodes: InpaintMediapipe. This node provides a simple interface to inpaint." + }, + { + "author": "kwaroran", + "title": "abg-comfyui", + "reference": "https://github.com/kwaroran/abg-comfyui", + "files": [ + "https://github.com/kwaroran/abg-comfyui" + ], + "install_type": "git-clone", + "description": "Nodes: Remove Image Background (abg). An Anime Background Remover node for ComfyUI, based on this HF space; works the same as the ABG extension in automatic1111." + }, + { + "author": "bash-j", + "title": "Mikey Nodes", + "reference": "https://github.com/bash-j/mikey_nodes", + "files": [ + "https://github.com/bash-j/mikey_nodes" + ], + "install_type": "git-clone", + "description": "Nodes: Prompt With Style, Prompt With SDXL, Resize Image for SDXL, Save Image With Prompt Data, HaldCLUT, Empty Latent Ratio Select/Custom SDXL" + }, + { + "author": "failfa.st", + "title": "failfast-comfyui-extensions", + "reference": "https://github.com/failfa-st/failfast-comfyui-extensions", + "files": [ + "https://github.com/failfa-st/failfast-comfyui-extensions" + ], + "install_type": "git-clone", + "description": "Node color customization, custom colors, dot reroutes, link rendering options, straight lines, group freezing, node pinning, automated arrangement of nodes, copy image" + }, + { + "author": "Pfaeff", + "title": "pfaeff-comfyui", + "reference": "https://github.com/Pfaeff/pfaeff-comfyui", + "files": [ + "https://github.com/Pfaeff/pfaeff-comfyui" + ], + "install_type": "git-clone", + "description": "Nodes: AstropulsePixelDetector, BackgroundRemover, ImagePadForBetterOutpaint, InpaintingPipelineLoader, Inpainting, ..." + }, + { + "author": "wallish77", + "title": "wlsh_nodes", + "reference": "https://github.com/wallish77/wlsh_nodes", + "files": [ + "https://github.com/wallish77/wlsh_nodes" + ], + "install_type": "git-clone", + "description": "Nodes: Checkpoint Loader with Name, Save Prompt Info, Outpaint to Image, CLIP Positive-Negative, SDXL Quick Empty Latent, Empty Latent by Ratio, Time String, SDXL Steps, SDXL Resolutions ..." 
+ }, + { + "author": "Kosinkadink", + "title": "ComfyUI-Advanced-ControlNet", + "reference": "https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet", + "files": [ + "https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet" + ], + "install_type": "git-clone", + "description": "Nodes: ControlNetLoaderAdvanced, DiffControlNetLoaderAdvanced, ScaledSoftControlNetWeights, SoftControlNetWeights, CustomControlNetWeights, SoftT2IAdapterWeights, CustomT2IAdapterWeights" + }, + { + "author": "Kosinkadink", + "title": "AnimateDiff Evolved", + "reference": "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved", + "files": [ + "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved" + ], + "install_type": "git-clone", + "description": "A forked repository that actively maintains [a/AnimateDiff](https://github.com/ArtVentureX/comfyui-animatediff), created by ArtVentureX.\n\nImproved AnimateDiff integration for ComfyUI, adapts from sd-webui-animatediff.\n[w/Download one or more motion models from [a/Original Models](https://huggingface.co/guoyww/animatediff/tree/main) | [a/Finetuned Models](https://huggingface.co/manshoety/AD_Stabilized_Motion/tree/main). See README for additional model links and usage. Put the model weights under %%ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models%%. You are free to rename the models, but keeping the original names will ease use when sharing your workflow.]" + }, + { + "author": "Kosinkadink", + "title": "ComfyUI-VideoHelperSuite", + "reference": "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite", + "files": [ + "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite" + ], + "install_type": "git-clone", + "description": "Nodes: VHS_VideoCombine. Nodes related to video workflows" + }, + { + "author": "Gourieff", + "title": "ReActor Node for ComfyUI", + "reference": "https://github.com/Gourieff/comfyui-reactor-node", + "files": [ + "https://github.com/Gourieff/comfyui-reactor-node" + ], + "install_type": "git-clone", + "description": "The Fast and Simple 'roop-like' Face Swap Extension Node for ComfyUI, based on ReActor (ex Roop-GE) SD-WebUI Face Swap Extension" + }, + { + "author": "imb101", + "title": "FaceSwap", + "reference": "https://github.com/imb101/ComfyUI-FaceSwap", + "files": [ + "https://github.com/imb101/ComfyUI-FaceSwap" + ], + "install_type": "git-clone", + "description": "Nodes: FaceSwapNode. Very basic custom node to enable face swapping in ComfyUI. (roop)" + }, + { + "author": "Chaoses-Ib", + "title": "ComfyUI_Ib_CustomNodes", + "reference": "https://github.com/Chaoses-Ib/ComfyUI_Ib_CustomNodes", + "files": [ + "https://github.com/Chaoses-Ib/ComfyUI_Ib_CustomNodes" + ], + "install_type": "git-clone", + "description": "Nodes: LoadImageFromPath. Load Image From Path loads the image directly from its source path, avoiding the duplicate files created in the input directory when uploading through the built-in Load Image node." + }, + { + "author": "AIrjen", + "title": "One Button Prompt", + "reference": "https://github.com/AIrjen/OneButtonPrompt", + "files": [ + "https://github.com/AIrjen/OneButtonPrompt" + ], + "install_type": "git-clone", + "description": "One Button Prompt has a prompt generation node for beginners who have problems writing a good prompt, or advanced users who want to get inspired. It generates an entire prompt from scratch. It is random, but controlled. You simply load up the script and press generate, and let it surprise you." 
+ }, + { + "author": "coreyryanhanson", + "title": "ComfyQR", + "reference": "https://github.com/coreyryanhanson/ComfyQR", + "files": [ + "https://github.com/coreyryanhanson/ComfyQR" + ], + "install_type": "git-clone", + "description": "QR generation within ComfyUI. Contains nodes suitable for workflows from generating basic QR images to techniques with advanced QR masking." + }, + { + "author": "coreyryanhanson", + "title": "ComfyQR-scanning-nodes", + "reference": "https://github.com/coreyryanhanson/ComfyQR-scanning-nodes", + "files": [ + "https://github.com/coreyryanhanson/ComfyQR-scanning-nodes" + ], + "install_type": "git-clone", + "description": "A set of ComfyUI nodes to quickly test generated QR codes for scannability. A companion project to ComfyQR." + }, + { + "author": "dimtoneff", + "title": "ComfyUI PixelArt Detector", + "reference": "https://github.com/dimtoneff/ComfyUI-PixelArt-Detector", + "files": [ + "https://github.com/dimtoneff/ComfyUI-PixelArt-Detector" + ], + "install_type": "git-clone", + "description": "This node manipulates a pixel art image so that it looks pixel perfect (downscales, changes the palette, upscales, etc.)." + }, + { + "author": "dimtoneff", + "title": "Eagle PNGInfo", + "reference": "https://github.com/hylarucoder/ComfyUI-Eagle-PNGInfo", + "files": [ + "https://github.com/hylarucoder/ComfyUI-Eagle-PNGInfo" + ], + "install_type": "git-clone", + "description": "Nodes: EagleImageNode" + }, + { + "author": "theUpsider", + "title": "Styles CSV Loader Extension for ComfyUI", + "reference": "https://github.com/theUpsider/ComfyUI-Styles_CSV_Loader", + "files": [ + "https://github.com/theUpsider/ComfyUI-Styles_CSV_Loader" + ], + "install_type": "git-clone", + "description": "This extension allows users to load styles from a CSV file, primarily for migration purposes from the automatic1111 Stable Diffusion web UI." + }, + { + "author": "M1kep", + "title": "Comfy_KepListStuff", + "reference": "https://github.com/M1kep/Comfy_KepListStuff", + "files": [ + "https://github.com/M1kep/Comfy_KepListStuff" + ], + "install_type": "git-clone", + "description": "Nodes: Range(Step), Range(Num Steps), List Length, Image Overlay, Stack Images, Empty Images, Join Image Lists, Join Float Lists. This extension provides various list manipulation nodes" + }, + { + "author": "M1kep", + "title": "ComfyLiterals", + "reference": "https://github.com/M1kep/ComfyLiterals", + "files": [ + "https://github.com/M1kep/ComfyLiterals" + ], + "install_type": "git-clone", + "description": "Nodes: Int, Float, String, Operation, Checkpoint" + }, + { + "author": "M1kep", + "title": "KepPromptLang", + "reference": "https://github.com/M1kep/KepPromptLang", + "files": [ + "https://github.com/M1kep/KepPromptLang" + ], + "install_type": "git-clone", + "description": "Nodes: Build Gif, Special CLIP Loader. It offers various manipulation capabilities for the internal operations of the prompt." + }, + { + "author": "M1kep", + "title": "Comfy_KepMatteAnything", + "reference": "https://github.com/M1kep/Comfy_KepMatteAnything", + "files": [ + "https://github.com/M1kep/Comfy_KepMatteAnything" + ], + "install_type": "git-clone", + "description": "This extension provides a custom node that allows the use of [a/Matte Anything](https://github.com/hustvl/Matte-Anything) in ComfyUI." 
+ }, + { + "author": "M1kep", + "title": "Comfy_KepKitchenSink", + "reference": "https://github.com/M1kep/Comfy_KepKitchenSink", + "files": [ + "https://github.com/M1kep/Comfy_KepKitchenSink" + ], + "install_type": "git-clone", + "description": "Nodes: KepRotateImage" + }, + { + "author": "M1kep", + "title": "ComfyUI-OtherVAEs", + "reference": "https://github.com/M1kep/ComfyUI-OtherVAEs", + "files": [ + "https://github.com/M1kep/ComfyUI-OtherVAEs" + ], + "install_type": "git-clone", + "description": "Nodes: TAESD VAE Decode" + }, + { + "author": "M1kep", + "title": "ComfyUI-KepOpenAI", + "reference": "https://github.com/M1kep/ComfyUI-KepOpenAI", + "files": [ + "https://github.com/M1kep/ComfyUI-KepOpenAI" + ], + "install_type": "git-clone", + "description": "ComfyUI-KepOpenAI is a user-friendly node that serves as an interface to the GPT-4 with Vision (GPT-4V) API. This integration facilitates the processing of images coupled with text prompts, leveraging the capabilities of the OpenAI API to generate text completions that are contextually relevant to the provided inputs." + }, + { + "author": "uarefans", + "title": "ComfyUI-Fans", + "reference": "https://github.com/uarefans/ComfyUI-Fans", + "files": [ + "https://github.com/uarefans/ComfyUI-Fans" + ], + "install_type": "git-clone", + "description": "Nodes: Fans Styler (Max 10 Style), Fans Text Concat (Until 10 text)." + }, + { + "author": "NicholasMcCarthy", + "title": "ComfyUI_TravelSuite", + "reference": "https://github.com/NicholasMcCarthy/ComfyUI_TravelSuite", + "files": [ + "https://github.com/NicholasMcCarthy/ComfyUI_TravelSuite" + ], + "install_type": "git-clone", + "description": "ComfyUI custom nodes to apply various latent travel techniques." + }, + { + "author": "ManglerFTW", + "title": "ComfyI2I", + "reference": "https://github.com/ManglerFTW/ComfyI2I", + "files": [ + "https://github.com/ManglerFTW/ComfyI2I" + ], + "install_type": "git-clone", + "description": "A set of custom nodes to perform image-to-image functions in ComfyUI." + }, + { + "author": "theUpsider", + "title": "ComfyUI-Logic", + "reference": "https://github.com/theUpsider/ComfyUI-Logic", + "files": [ + "https://github.com/theUpsider/ComfyUI-Logic" + ], + "install_type": "git-clone", + "description": "An extension to ComfyUI that introduces logic nodes and conditional rendering capabilities." + }, + { + "author": "mpiquero7164", + "title": "SaveImgPrompt", + "reference": "https://github.com/mpiquero7164/ComfyUI-SaveImgPrompt", + "files": [ + "https://github.com/mpiquero7164/ComfyUI-SaveImgPrompt" + ], + "install_type": "git-clone", + "description": "Save a PNG or JPEG, with the option to save the prompt/workflow in a text or JSON file for each image, plus workflow loading." + }, + { + "author": "m-sokes", + "title": "ComfyUI Sokes Nodes", + "reference": "https://github.com/m-sokes/ComfyUI-Sokes-Nodes", + "files": [ + "https://github.com/m-sokes/ComfyUI-Sokes-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes: Empty Latent Randomizer (9 Inputs)" + }, + { + "author": "Extraltodeus", + "title": "noise latent perlinpinpin", + "reference": "https://github.com/Extraltodeus/noise_latent_perlinpinpin", + "files": [ + "https://github.com/Extraltodeus/noise_latent_perlinpinpin" + ], + "install_type": "git-clone", + "description": "Nodes: NoisyLatentPerlin. This allows creating latent spaces filled with Perlin-based noise that can actually be used by the samplers." 
+ }, + { + "author": "Extraltodeus", + "title": "LoadLoraWithTags", + "reference": "https://github.com/Extraltodeus/LoadLoraWithTags", + "files": [ + "https://github.com/Extraltodeus/LoadLoraWithTags" + ], + "install_type": "git-clone", + "description": "Nodes: LoadLoraWithTags. Save/load trigger words for LoRAs from a JSON file and auto-fetch them from Civitai if they are missing." + }, + { + "author": "Extraltodeus", + "title": "sigmas_tools_and_the_golden_scheduler", + "reference": "https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler", + "files": [ + "https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler" + ], + "install_type": "git-clone", + "description": "A few nodes to mix sigmas, a custom scheduler that uses phi, and another that uses eval() so you can schedule with custom formulas." + }, + { + "author": "Extraltodeus", + "title": "ComfyUI-AutomaticCFG", + "reference": "https://github.com/Extraltodeus/ComfyUI-AutomaticCFG", + "files": [ + "https://github.com/Extraltodeus/ComfyUI-AutomaticCFG" + ], + "install_type": "git-clone", + "description": "My own version 'from scratch' of a self-rescaling CFG. It isn't much but it's honest work.\nTLDR: set your CFG at 8 to try it. No more burned images and artifacts. CFG is also a bit more sensitive because it's a proportion around 8. A low scale like 4 also gives really nice results, since your CFG is no longer the usual CFG. In general, even with relatively low settings, it seems to improve the quality." + }, + { + "author": "JPS", + "title": "JPS Custom Nodes for ComfyUI", + "reference": "https://github.com/JPS-GER/ComfyUI_JPS-Nodes", + "files": [ + "https://github.com/JPS-GER/ComfyUI_JPS-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes: Various nodes to handle SDXL Resolutions, SDXL Basic Settings, IP Adapter Settings, Revision Settings, SDXL Prompt Styler, Crop Image to Square, Crop Image to Target Size, Get Date-Time String, Resolution Multiply, Largest Integer, 5-to-1 Switches for Integer, Images, Latents, Conditioning, Model, VAE, ControlNet" + }, + { + "author": "hustille", + "title": "hus' utils for ComfyUI", + "reference": "https://github.com/hustille/ComfyUI_hus_utils", + "files": [ + "https://github.com/hustille/ComfyUI_hus_utils" + ], + "install_type": "git-clone", + "description": "ComfyUI nodes primarily for seed and filename generation" + }, + { + "author": "hustille", + "title": "ComfyUI_Fooocus_KSampler", + "reference": "https://github.com/hustille/ComfyUI_Fooocus_KSampler", + "files": [ + "https://github.com/hustille/ComfyUI_Fooocus_KSampler" + ], + "install_type": "git-clone", + "description": "Nodes: KSampler With Refiner (Fooocus). The KSampler from [a/Fooocus](https://github.com/lllyasviel/Fooocus) as a ComfyUI node [w/NOTE: This patches basic ComfyUI behaviour - don't use together with other samplers. Or perhaps do? Other samplers might profit from those changes ... ymmv.]" + }, + { + "author": "badjeff", + "title": "LoRA Tag Loader for ComfyUI", + "reference": "https://github.com/badjeff/comfyui_lora_tag_loader", + "files": [ + "https://github.com/badjeff/comfyui_lora_tag_loader" + ], + "install_type": "git-clone", + "description": "A ComfyUI custom node to read LoRA tag(s) from text and load them into the checkpoint model." 
+ }, + { + "author": "rgthree", + "title": "rgthree's ComfyUI Nodes", + "reference": "https://github.com/rgthree/rgthree-comfy", + "files": [ + "https://github.com/rgthree/rgthree-comfy" + ], + "nodename_pattern": " \\(rgthree\\)$", + "install_type": "git-clone", + "description": "Nodes: Seed, Reroute, Context, Lora Loader Stack, Context Switch, Fast Muter. These custom nodes help organize the building of complex workflows." + }, + { + "author": "AIGODLIKE", + "title": "AIGODLIKE-COMFYUI-TRANSLATION", + "reference": "https://github.com/AIGODLIKE/AIGODLIKE-COMFYUI-TRANSLATION", + "files": [ + "https://github.com/AIGODLIKE/AIGODLIKE-COMFYUI-TRANSLATION" + ], + "install_type": "git-clone", + "description": "It provides language settings. (Contributions from users of various languages are needed to support each language.)" + }, + { + "author": "AIGODLIKE", + "title": "AIGODLIKE-ComfyUI-Studio", + "reference": "https://github.com/AIGODLIKE/AIGODLIKE-ComfyUI-Studio", + "files": [ + "https://github.com/AIGODLIKE/AIGODLIKE-ComfyUI-Studio" + ], + "install_type": "git-clone", + "description": "Improves the interactive experience of using ComfyUI, such as making the loading of ComfyUI models more intuitive and making it easier to create model thumbnails." + }, + { + "author": "syllebra", + "title": "BilboX's ComfyUI Custom Nodes", + "reference": "https://github.com/syllebra/bilbox-comfyui", + "files": [ + "https://github.com/syllebra/bilbox-comfyui" + ], + "install_type": "git-clone", + "description": "Nodes: BilboX's PromptGeek Photo Prompt. This provides a convenient way to compose photorealistic prompts into ComfyUI." + }, + { + "author": "Girish Gopaul", + "title": "Save Image with Generation Metadata", + "reference": "https://github.com/giriss/comfy-image-saver", + "files": [ + "https://github.com/giriss/comfy-image-saver" + ], + "install_type": "git-clone", + "description": "All the tools you need to save images with their generation metadata on ComfyUI. Compatible with Civitai & Prompthero geninfo auto-detection. Works with png, jpeg and webp." + }, + { + "author": "shingo1228", + "title": "ComfyUI-send-Eagle(slim)", + "reference": "https://github.com/shingo1228/ComfyUI-send-eagle-slim", + "files": [ + "https://github.com/shingo1228/ComfyUI-send-eagle-slim" + ], + "install_type": "git-clone", + "description": "Nodes: Send Webp Image to Eagle. This is an extension node for ComfyUI that allows you to send generated images in webp format to Eagle. This extension node is a re-implementation of the Eagle linkage functions of the previous ComfyUI-send-Eagle node, focusing on the functions required for this node." + }, + { + "author": "shingo1228", + "title": "ComfyUI-SDXL-EmptyLatentImage", + "reference": "https://github.com/shingo1228/ComfyUI-SDXL-EmptyLatentImage", + "files": [ + "https://github.com/shingo1228/ComfyUI-SDXL-EmptyLatentImage" + ], + "install_type": "git-clone", + "description": "Nodes: SDXL Empty Latent Image. An extension node for ComfyUI that allows you to select a resolution from the pre-defined json files and output a Latent Image." + }, + { + "author": "laksjdjf", + "title": "pfg-ComfyUI", + "reference": "https://github.com/laksjdjf/pfg-ComfyUI", + "files": [ + "https://github.com/laksjdjf/pfg-ComfyUI" + ], + "install_type": "git-clone", + "description": "ComfyUI version of https://github.com/laksjdjf/pfg-webui. 
(To use this extension, you need to download the required model file from **Install Models**)" + }, + { + "author": "laksjdjf", + "title": "attention-couple-ComfyUI", + "reference": "https://github.com/laksjdjf/attention-couple-ComfyUI", + "files": [ + "https://github.com/laksjdjf/attention-couple-ComfyUI" + ], + "install_type": "git-clone", + "description": "Nodes:Attention couple. This is a custom node that manipulates region-specific prompts. While vanilla ComfyUI employs an area specification method based on latent couples, this node divides regions using attention layers within UNet." + }, + { + "author": "laksjdjf", + "title": "cd-tuner_negpip-ComfyUI", + "reference": "https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI", + "files": [ + "https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI" + ], + "install_type": "git-clone", + "description": "Nodes:Apply CDTuner, Apply Negapip. This extension provides the [a/CD(Color/Detail) Tuner](https://github.com/hako-mikan/sd-webui-cd-tuner) and the [a/Negative Prompt in the Prompt](https://github.com/hako-mikan/sd-webui-negpip) features." + }, + { + "author": "laksjdjf", + "title": "LoRA-Merger-ComfyUI", + "reference": "https://github.com/laksjdjf/LoRA-Merger-ComfyUI", + "files": [ + "https://github.com/laksjdjf/LoRA-Merger-ComfyUI" + ], + "install_type": "git-clone", + "description": "Nodes:Load LoRA Weight Only, Load LoRA from Weight, Merge LoRA, Save LoRA. This extension provides nodes for merging LoRA." + }, + { + "author": "laksjdjf", + "title": "LCMSampler-ComfyUI", + "reference": "https://github.com/laksjdjf/LCMSampler-ComfyUI", + "files": [ + "https://github.com/laksjdjf/LCMSampler-ComfyUI" + ], + "install_type": "git-clone", + "description": "This extension node is intended for the use of LCM conversion for SSD-1B-anime. It does not guarantee operation with the original LCM (as it cannot load weights in the current version). To take advantage of fast generation with LCM, a node for using TAESD as a decoder is also provided. This is inspired by ComfyUI-OtherVAEs." + }, + { + "author": "alsritter", + "title": "asymmetric-tiling-comfyui", + "reference": "https://github.com/alsritter/asymmetric-tiling-comfyui", + "files": [ + "https://github.com/alsritter/asymmetric-tiling-comfyui" + ], + "install_type": "git-clone", + "description": "Nodes:Asymmetric_Tiling_KSampler. " + }, + { + "author": "meap158", + "title": "GPU temperature protection", + "reference": "https://github.com/meap158/ComfyUI-GPU-temperature-protection", + "files": [ + "https://github.com/meap158/ComfyUI-GPU-temperature-protection" + ], + "install_type": "git-clone", + "description": "Pause image generation when GPU temperature exceeds threshold." + }, + { + "author": "meap158", + "title": "ComfyUI-Prompt-Expansion", + "reference": "https://github.com/meap158/ComfyUI-Prompt-Expansion", + "files": [ + "https://github.com/meap158/ComfyUI-Prompt-Expansion" + ], + "install_type": "git-clone", + "description": "Dynamic prompt expansion, powered by GPT-2 locally on your device." + }, + { + "author": "meap158", + "title": "ComfyUI-Background-Replacement", + "reference": "https://github.com/meap158/ComfyUI-Background-Replacement", + "files": [ + "https://github.com/meap158/ComfyUI-Background-Replacement" + ], + "install_type": "git-clone", + "description": "Instantly replace your image's background." 
+ }, + { + "author": "TeaCrab", + "title": "ComfyUI-TeaNodes", + "reference": "https://github.com/TeaCrab/ComfyUI-TeaNodes", + "files": [ + "https://github.com/TeaCrab/ComfyUI-TeaNodes" + ], + "install_type": "git-clone", + "description": "Nodes:TC_EqualizeCLAHE, TC_SizeApproximation, TC_ImageResize, TC_ImageScale, TC_ColorFill." + }, + { + "author": "nagolinc", + "title": "ComfyUI_FastVAEDecorder_SDXL", + "reference": "https://github.com/nagolinc/ComfyUI_FastVAEDecorder_SDXL", + "files": [ + "https://github.com/nagolinc/ComfyUI_FastVAEDecorder_SDXL" + ], + "install_type": "git-clone", + "description": "Based off of: [a/Birch-san/diffusers-play/approx_vae](https://github.com/Birch-san/diffusers-play/tree/main/approx_vae). This ComfyUI node allows you to quickly preview SDXL 1.0 latents." + }, + { + "author": "bradsec", + "title": "ResolutionSelector for ComfyUI", + "reference": "https://github.com/bradsec/ComfyUI_ResolutionSelector", + "files": [ + "https://github.com/bradsec/ComfyUI_ResolutionSelector" + ], + "install_type": "git-clone", + "description": "Nodes:ResolutionSelector" + }, + { + "author": "kohya-ss", + "title": "ControlNet-LLLite-ComfyUI", + "reference": "https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI", + "files": [ + "https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI" + ], + "install_type": "git-clone", + "description": "Nodes: LLLiteLoader" + }, + { + "author": "jjkramhoeft", + "title": "ComfyUI-Jjk-Nodes", + "reference": "https://github.com/jjkramhoeft/ComfyUI-Jjk-Nodes", + "files": [ + "https://github.com/jjkramhoeft/ComfyUI-Jjk-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes: SDXLRecommendedImageSize, JjkText, JjkShowText, JjkConcat. A set of custom nodes for ComfyUI - focused on text and parameter utility" + }, + { + "author": "dagthomas", + "title": "SDXL Auto Prompter", + "reference": "https://github.com/dagthomas/comfyui_dagthomas", + "files": [ + "https://github.com/dagthomas/comfyui_dagthomas" + ], + "install_type": "git-clone", + "description": "Easy prompting for generation of endless random art pieces and photographs!" + }, + { + "author": "marhensa", + "title": "Recommended Resolution Calculator", + "reference": "https://github.com/marhensa/sdxl-recommended-res-calc", + "files": [ + "https://github.com/marhensa/sdxl-recommended-res-calc" + ], + "install_type": "git-clone", + "description": "Input your desired output final resolution, it will automaticaly set the initial recommended SDXL ratio/size and its Upscale Factor to reach that output final resolution, also there's an option for 2x/4x reverse Upscale Factor. These all to avoid using bad/arbitary initial ratio/resolution." + }, + { + "author": "Nuked", + "title": "ComfyUI-N-Nodes", + "reference": "https://github.com/Nuked88/ComfyUI-N-Nodes", + "files": [ + "https://github.com/Nuked88/ComfyUI-N-Nodes" + ], + "install_type": "git-clone", + "description": "A suite of custom nodes for ConfyUI that includes GPT text-prompt generation, LoadVideo,SaveVideo,LoadFramesFromFolder and FrameInterpolator" + }, + { + "author": "richinsley", + "title": "Comfy-LFO", + "reference": "https://github.com/richinsley/Comfy-LFO", + "files": [ + "https://github.com/richinsley/Comfy-LFO" + ], + "install_type": "git-clone", + "description": "Nodes:LFO_Triangle, LFO_Sine, SawtoothNode, SquareNode, PulseNode. ComfyUI custom nodes to create Low Frequency Oscillators." 
+ }, + { + "author": "Beinsezii", + "title": "bsz-cui-extras", + "reference": "https://github.com/Beinsezii/bsz-cui-extras", + "files": [ + "https://github.com/Beinsezii/bsz-cui-extras" + ], + "install_type": "git-clone", + "description": "This contains all-in-one 'principled' nodes for T2I, I2I, refining, and scaling. Additionally it has many tools for directly manipulating the color of latents, high res fix math, and scripted image post-processing." + }, + { + "author": "youyegit", + "title": "tdxh_node_comfyui", + "reference": "https://github.com/youyegit/tdxh_node_comfyui", + "files": [ + "https://github.com/youyegit/tdxh_node_comfyui" + ], + "install_type": "git-clone", + "description": "Nodes:TdxhImageToSize, TdxhImageToSizeAdvanced, TdxhLoraLoader, TdxhIntInput, TdxhFloatInput, TdxhStringInput. Some nodes for stable diffusion comfyui. Sometimes it helps conveniently to use less nodes for doing the same things." + }, + { + "author": "Sxela", + "title": "ComfyWarp", + "reference": "https://github.com/Sxela/ComfyWarp", + "files": [ + "https://github.com/Sxela/ComfyWarp" + ], + "install_type": "git-clone", + "description": "Nodes:LoadFrameSequence, LoadFrame" + }, + { + "author": "skfoo", + "title": "ComfyUI-Coziness", + "reference": "https://github.com/skfoo/ComfyUI-Coziness", + "files": [ + "https://github.com/skfoo/ComfyUI-Coziness" + ], + "install_type": "git-clone", + "description": "Nodes:MultiLora Loader, Lora Text Extractor. Provides a node for assisting in loading loras through text." + }, + { + "author": "YOUR-WORST-TACO", + "title": "ComfyUI-TacoNodes", + "reference": "https://github.com/YOUR-WORST-TACO/ComfyUI-TacoNodes", + "files": [ + "https://github.com/YOUR-WORST-TACO/ComfyUI-TacoNodes" + ], + "install_type": "git-clone", + "description": "Nodes:TacoLatent, TacoAnimatedLoader, TacoImg2ImgAnimatedLoader, TacoGifMaker." + }, + { + "author": "Lerc", + "title": "Canvas Tab", + "reference": "https://github.com/Lerc/canvas_tab", + "files": [ + "https://github.com/Lerc/canvas_tab" + ], + "install_type": "git-clone", + "description": "This extension provides a full page image editor with mask support. There are two nodes, one to receive images from the editor and one to send images to the editor." + }, + { + "author": "Ttl", + "title": "ComfyUI Neural network latent upscale custom node", + "reference": "https://github.com/Ttl/ComfyUi_NNLatentUpscale", + "files": [ + "https://github.com/Ttl/ComfyUi_NNLatentUpscale" + ], + "install_type": "git-clone", + "description": "A custom ComfyUI node designed for rapid latent upscaling using a compact neural network, eliminating the need for VAE-based decoding and encoding." + }, + { + "author": "spro", + "title": "Latent Mirror node for ComfyUI", + "reference": "https://github.com/spro/comfyui-mirror", + "files": [ + "https://github.com/spro/comfyui-mirror" + ], + "install_type": "git-clone", + "description": "Nodes: Latent Mirror. Node to mirror a latent along the Y (vertical / left to right) or X (horizontal / top to bottom) axis." + }, + { + "author": "Tropfchen", + "title": "Embedding Picker", + "reference": "https://github.com/Tropfchen/ComfyUI-Embedding_Picker", + "files": [ + "https://github.com/Tropfchen/ComfyUI-Embedding_Picker" + ], + "install_type": "git-clone", + "description": "Tired of forgetting and misspelling often weird names of embeddings you use? Or perhaps you use only one, cause you forgot you have tens of them installed?" 
+ }, + { + "author": "Acly", + "title": "ComfyUI Nodes for External Tooling", + "reference": "https://github.com/Acly/comfyui-tooling-nodes", + "files": [ + "https://github.com/Acly/comfyui-tooling-nodes" + ], + "install_type": "git-clone", + "description": "Nodes: Load Image (Base64), Load Mask (Base64), Send Image (WebSocket), Crop Image, Apply Mask to Image. Provides nodes geared towards using ComfyUI as a backend for external tools.\nNOTE: This extension is necessary when using an external tool like [comfyui-capture-inference](https://github.com/minux302/comfyui-capture-inference)." + }, + { + "author": "Acly", + "title": "ComfyUI Inpaint Nodes", + "reference": "https://github.com/Acly/comfyui-inpaint-nodes", + "files": [ + "https://github.com/Acly/comfyui-inpaint-nodes" + ], + "install_type": "git-clone", + "description": "Experimental nodes for better inpainting with ComfyUI. Adds two nodes which allow using [a/Fooocus](https://github.com/Acly/comfyui-inpaint-nodes) inpaint model. It's a small and flexible patch which can be applied to any SDXL checkpoint and will transform it into an inpaint model. This model can then be used like other inpaint models, and provides the same benefits. [a/Read more](https://github.com/lllyasviel/Fooocus/discussions/414)" + }, + { + "author": "picturesonpictures", + "title": "comfy_PoP", + "reference": "https://github.com/picturesonpictures/comfy_PoP", + "files": ["https://github.com/picturesonpictures/comfy_PoP"], + "install_type": "git-clone", + "description": "A collection of custom nodes for ComfyUI. Includes a quick canny edge detection node with unconventional settings, simple LoRA stack nodes for workflow efficiency, and a customizable aspect ratio node." + }, + { + "author": "Dream Project", + "title": "Dream Project Animation Nodes", + "reference": "https://github.com/alt-key-project/comfyui-dream-project", + "files": [ + "https://github.com/alt-key-project/comfyui-dream-project" + ], + "install_type": "git-clone", + "description": "This extension offers various nodes that are useful for Deforum-like animations in ComfyUI." + }, + { + "author": "Dream Project", + "title": "Dream Video Batches", + "reference": "https://github.com/alt-key-project/comfyui-dream-video-batches", + "files": [ + "https://github.com/alt-key-project/comfyui-dream-video-batches" + ], + "install_type": "git-clone", + "description": "Provide utilities for batch based video generation workflows (s.a. AnimateDiff and Stable Video Diffusion)." + }, + { + "author": "seanlynch", + "title": "ComfyUI Optical Flow", + "reference": "https://github.com/seanlynch/comfyui-optical-flow", + "files": [ + "https://github.com/seanlynch/comfyui-optical-flow" + ], + "install_type": "git-clone", + "description": "This package contains three nodes to help you compute optical flow between pairs of images, usually adjacent frames in a video, visualize the flow, and apply the flow to another image of the same dimensions. Most of the code is from Deforum, so this is released under the same license (MIT)." + }, + { + "author": "ealkanat", + "title": "ComfyUI Easy Padding", + "reference": "https://github.com/ealkanat/comfyui_easy_padding", + "files": [ + "https://github.com/ealkanat/comfyui_easy_padding" + ], + "install_type": "git-clone", + "description": "ComfyUI Easy Padding is a simple custom ComfyUI node that helps you to add padding to images on ComfyUI." 
+ }, + { + "author": "ArtBot2023", + "title": "Character Face Swap", + "reference": "https://github.com/ArtBot2023/CharacterFaceSwap", + "files": [ + "https://github.com/ArtBot2023/CharacterFaceSwap" + ], + "install_type": "git-clone", + "description": "Character face swap with LoRA and embeddings." + }, + { + "author": "mav-rik", + "title": "Facerestore CF (Code Former)", + "reference": "https://github.com/mav-rik/facerestore_cf", + "files": [ + "https://github.com/mav-rik/facerestore_cf" + ], + "install_type": "git-clone", + "description": "This is a copy of [a/facerestore custom node](https://civitai.com/models/24690/comfyui-facerestore-node) with a bit of a change to support CodeFormer Fidelity parameter. These ComfyUI nodes can be used to restore faces in images similar to the face restore option in AUTOMATIC1111 webui.\nNOTE: To use this node, you need to download the face restoration model and face detection model from the 'Install models' menu." + }, + { + "author": "braintacles", + "title": "braintacles-nodes", + "reference": "https://github.com/braintacles/braintacles-comfyui-nodes", + "files": [ + "https://github.com/braintacles/braintacles-comfyui-nodes" + ], + "install_type": "git-clone", + "description": "Nodes: CLIPTextEncodeSDXL-Multi-IO, CLIPTextEncodeSDXL-Pipe, Empty Latent Image from Aspect-Ratio, Random Find and Replace." + }, + { + "author": "hayden-fr", + "title": "ComfyUI-Model-Manager", + "reference": "https://github.com/hayden-fr/ComfyUI-Model-Manager", + "files": [ + "https://github.com/hayden-fr/ComfyUI-Model-Manager" + ], + "install_type": "git-clone", + "description": "Manage models: browsing, download and delete." + }, + { + "author": "hayden-fr", + "title": "ComfyUI-Image-Browsing", + "reference": "https://github.com/hayden-fr/ComfyUI-Image-Browsing", + "files": [ + "https://github.com/hayden-fr/ComfyUI-Image-Browsing" + ], + "install_type": "git-clone", + "description": "Image Browsing: browsing, download and delete." + }, + { + "author": "ali1234", + "title": "comfyui-job-iterator", + "reference": "https://github.com/ali1234/comfyui-job-iterator", + "files": [ + "https://github.com/ali1234/comfyui-job-iterator" + ], + "install_type": "git-clone", + "description": "Implements iteration over sequences within a single workflow run. [w/NOTE: This node replaces the execution of ComfyUI for iterative processing functionality.]" + }, + { + "author": "jmkl", + "title": "ComfyUI Ricing", + "reference": "https://github.com/jmkl/ComfyUI-ricing", + "files": [ + "https://github.com/jmkl/ComfyUI-ricing" + ], + "install_type": "git-clone", + "description": "ComfyUI custom user.css and some script stuff. mainly for web interface." + }, + { + "author": "budihartono", + "title": "Otonx's Custom Nodes", + "reference": "https://github.com/budihartono/comfyui_otonx_nodes", + "files": [ + "https://github.com/budihartono/comfyui_otonx_nodes" + ], + "install_type": "git-clone", + "description": "Nodes: OTX Multiple Values, OTX KSampler Feeder. This extension provides custom nodes for ComfyUI created for personal projects. Made available for reference. Nodes may be updated or changed intermittently or not at all. Review & test before use." + }, + { + "author": "ramyma", + "title": "A8R8 ComfyUI Nodes", + "reference": "https://github.com/ramyma/A8R8_ComfyUI_nodes", + "files": [ + "https://github.com/ramyma/A8R8_ComfyUI_nodes" + ], + "install_type": "git-clone", + "description": "Nodes: Base64Image Input Node, Base64Image Output Node. 
[a/A8R8](https://github.com/ramyma/a8r8) supporting nodes to integrate with ComfyUI" + }, + { + "author": "spinagon", + "title": "Seamless tiling Node for ComfyUI", + "reference": "https://github.com/spinagon/ComfyUI-seamless-tiling", + "files": [ + "https://github.com/spinagon/ComfyUI-seamless-tiling" + ], + "install_type": "git-clone", + "description": "Node for generating almost seamless textures, based on similar setting from A1111." + }, + { + "author": "BiffMunky", + "title": "Endless ️🌊✨ Nodes", + "reference": "https://github.com/tusharbhutt/Endless-Nodes", + "files": [ + "https://github.com/tusharbhutt/Endless-Nodes" + ], + "install_type": "git-clone", + "description": "A small set of nodes I created for various numerical and text inputs. Features image saver with ability to have JSON saved to separate folder, parameter collection nodes, two aesthetic scoring models, switches for text and numbers, and conversion of string to numeric and vice versa." + }, + { + "author": "spacepxl", + "title": "ComfyUI-HQ-Image-Save", + "reference": "https://github.com/spacepxl/ComfyUI-HQ-Image-Save", + "files": [ + "https://github.com/spacepxl/ComfyUI-HQ-Image-Save" + ], + "install_type": "git-clone", + "description": "Add Image Save nodes for TIFF 16 bit and EXR 32 bit formats. Probably only useful if you're applying a LUT or other color corrections, and care about preserving as much color accuracy as possible." + }, + { + "author": "spacepxl", + "title": "ComfyUI-Image-Filters", + "reference": "https://github.com/spacepxl/ComfyUI-Image-Filters", + "files": [ + "https://github.com/spacepxl/ComfyUI-Image-Filters" + ], + "install_type": "git-clone", + "description": "Image and matte filtering nodes for ComfyUI `image/filters/*`" + }, + { + "author": "spacepxl", + "title": "ComfyUI-RAVE", + "reference": "https://github.com/spacepxl/ComfyUI-RAVE", + "files": [ + "https://github.com/spacepxl/ComfyUI-RAVE" + ], + "install_type": "git-clone", + "description": "Unofficial ComfyUI implementation of [a/RAVE](https://rave-video.github.io/)" + }, + { + "author": "PTA", + "title": "auto nodes layout", + "reference": "https://github.com/phineas-pta/comfyui-auto-nodes-layout", + "files": [ + "https://github.com/phineas-pta/comfyui-auto-nodes-layout" + ], + "install_type": "git-clone", + "description": "A ComfyUI extension to apply better nodes layout algorithm to ComfyUI workflow (mostly for visualization purpose)" + }, + { + "author": "receyuki", + "title": "comfyui-prompt-reader-node", + "reference": "https://github.com/receyuki/comfyui-prompt-reader-node", + "files": [ + "https://github.com/receyuki/comfyui-prompt-reader-node" + ], + "install_type": "git-clone", + "description": "ComfyUI node version of the SD Prompt Reader." + }, + { + "author": "rklaffehn", + "title": "rk-comfy-nodes", + "reference": "https://github.com/rklaffehn/rk-comfy-nodes", + "files": [ + "https://github.com/rklaffehn/rk-comfy-nodes" + ], + "install_type": "git-clone", + "description": "Nodes: RK_CivitAIMetaChecker, RK_CivitAIAddHashes." + }, + { + "author": "cubiq", + "title": "ComfyUI Essentials", + "reference": "https://github.com/cubiq/ComfyUI_essentials", + "files": [ + "https://github.com/cubiq/ComfyUI_essentials" + ], + "install_type": "git-clone", + "description": "Essential nodes that are weirdly missing from ComfyUI core. With few exceptions they are new features and not commodities. I hope this will be just a temporary repository until the nodes get included into ComfyUI." 
+ }, + { + "author": "Clybius", + "title": "ComfyUI-Latent-Modifiers", + "reference": "https://github.com/Clybius/ComfyUI-Latent-Modifiers", + "files": [ + "https://github.com/Clybius/ComfyUI-Latent-Modifiers" + ], + "install_type": "git-clone", + "description": "Nodes: Latent Diffusion Mega Modifier. ComfyUI nodes which modify the latent during the diffusion process. (Sharpness, Tonemap, Rescale, Extra Noise)" + }, + { + "author": "Clybius", + "title": "ComfyUI Extra Samplers", + "reference": "https://github.com/Clybius/ComfyUI-Extra-Samplers", + "files": [ + "https://github.com/Clybius/ComfyUI-Extra-Samplers" + ], + "install_type": "git-clone", + "description": "Nodes: SamplerCustomNoise, SamplerCustomNoiseDuo, SamplerCustomModelMixtureDuo, SamplerRES_Momentumized, SamplerDPMPP_DualSDE_Momentumized, SamplerCLYB_4M_SDE_Momentumized, SamplerTTM, SamplerLCMCustom\nThis extension provides various custom samplers not offered by the default nodes in ComfyUI." + }, + { + "author": "mcmonkeyprojects", + "title": "Stable Diffusion Dynamic Thresholding (CFG Scale Fix)", + "reference": "https://github.com/mcmonkeyprojects/sd-dynamic-thresholding", + "files": [ + "https://github.com/mcmonkeyprojects/sd-dynamic-thresholding" + ], + "install_type": "git-clone", + "description": "Extension for StableSwarmUI, ComfyUI, and AUTOMATIC1111 Stable Diffusion WebUI that enables a way to use higher CFG Scales without color issues. This works by clamping latents between steps." + }, + { + "author": "Tropfchen", + "title": "YARS: Yet Another Resolution Selector", + "reference": "https://github.com/Tropfchen/ComfyUI-yaResolutionSelector", + "files": [ + "https://github.com/Tropfchen/ComfyUI-yaResolutionSelector" + ], + "install_type": "git-clone", + "description": "A slightly different Resolution Selector node, allowing to freely change base resolution and aspect ratio, with options to maintain the pixel count or use the base resolution as the highest or lowest dimension." + }, + { + "author": "chrisgoringe", + "title": "Variation seeds", + "reference": "https://github.com/chrisgoringe/cg-noise", + "files": [ + "https://github.com/chrisgoringe/cg-noise" + ], + "install_type": "git-clone", + "description": "Adds KSampler custom nodes with variation seed and variation strength." + }, + { + "author": "chrisgoringe", + "title": "Image chooser", + "reference": "https://github.com/chrisgoringe/cg-image-picker", + "files": [ + "https://github.com/chrisgoringe/cg-image-picker" + ], + "install_type": "git-clone", + "description": "A custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow." + }, + { + "author": "chrisgoringe", + "title": "Use Everywhere (UE Nodes)", + "reference": "https://github.com/chrisgoringe/cg-use-everywhere", + "files": [ + "https://github.com/chrisgoringe/cg-use-everywhere" + ], + "install_type": "git-clone", + "nodename_pattern": "(^(Prompts|Anything) Everywhere|Simple String)", + "description": "A set of nodes that allow data to be 'broadcast' to some or all unconnected inputs. Greatly reduces link spaghetti." 
+ }, + { + "author": "chrisgoringe", + "title": "Prompt Info", + "reference": "https://github.com/chrisgoringe/cg-prompt-info", + "files": [ + "https://github.com/chrisgoringe/cg-prompt-info" + ], + "install_type": "git-clone", + "description": "Prompt Info" + }, + { + "author": "TGu-97", + "title": "TGu Utilities", + "reference": "https://github.com/TGu-97/ComfyUI-TGu-utils", + "files": [ + "https://github.com/TGu-97/ComfyUI-TGu-utils" + ], + "install_type": "git-clone", + "description": "Nodes: MPN Switch, MPN Reroute, PN Switch. This is a set of custom nodes for ComfyUI. Mainly focus on control switches." + }, + { + "author": "seanlynch", + "title": "SRL's nodes", + "reference": "https://github.com/seanlynch/srl-nodes", + "files": [ + "https://github.com/seanlynch/srl-nodes" + ], + "install_type": "git-clone", + "description": "Nodes: SRL Conditional Interrupt, SRL Format String, SRL Eval, SRL Filter Image List. This is a collection of nodes I find useful. Note that at least one module allows execution of arbitrary code. Do not use any of these nodes on a system that allow untrusted users to control workflows or inputs.[w/WARNING: The custom nodes in this extension are vulnerable to **security risks** because they allow the execution of arbitrary code through the workflow]" + }, + { + "author": "alpertunga-bile", + "title": "prompt-generator", + "reference": "https://github.com/alpertunga-bile/prompt-generator-comfyui", + "files": [ + "https://github.com/alpertunga-bile/prompt-generator-comfyui" + ], + "install_type": "git-clone", + "description": "Custom AI prompt generator node for ComfyUI." + }, + { + "author": "mlinmg", + "title": "LaMa Preprocessor [WIP]", + "reference": "https://github.com/mlinmg/ComfyUI-LaMA-Preprocessor", + "files": [ + "https://github.com/mlinmg/ComfyUI-LaMA-Preprocessor" + ], + "install_type": "git-clone", + "description": "A LaMa prerocessor for ComfyUI. This preprocessor finally enable users to generate coherent inpaint and outpaint prompt-free. The best results are given on landscapes, not so much in drawings/animation." + }, + { + "author": "kijai", + "title": "KJNodes for ComfyUI", + "reference": "https://github.com/kijai/ComfyUI-KJNodes", + "files": [ + "https://github.com/kijai/ComfyUI-KJNodes" + ], + "install_type": "git-clone", + "description": "Various quality of life -nodes for ComfyUI, mostly just visual stuff to improve usability." + }, + { + "author": "kijai", + "title": "ComfyUI-CCSR", + "reference": "https://github.com/kijai/ComfyUI-CCSR", + "files": [ + "https://github.com/kijai/ComfyUI-CCSR" + ], + "install_type": "git-clone", + "description": "ComfyUI- CCSR upscaler node" + }, + { + "author": "kijai", + "title": "ComfyUI-SVD", + "reference": "https://github.com/kijai/ComfyUI-SVD", + "files": [ + "https://github.com/kijai/ComfyUI-SVD" + ], + "install_type": "git-clone", + "description": "Preliminary use of SVD in ComfyUI.\nNOTE: Quick Implementation, Unstable. See details on repositories." + }, + { + "author": "kijai", + "title": "Marigold depth estimation in ComfyUI", + "reference": "https://github.com/kijai/ComfyUI-Marigold", + "files": [ + "https://github.com/kijai/ComfyUI-Marigold" + ], + "install_type": "git-clone", + "description": "This is a wrapper node for Marigold depth estimation: [https://github.com/prs-eth/Marigold](https://github.com/kijai/ComfyUI-Marigold). 
Currently using the same diffusers pipeline as in the original implementation, so in addition to the custom node, you need the model in diffusers format.\nNOTE: See details in repo to install." + }, + { + "author": "kijai", + "title": "ComfyUI-DDColor", + "reference": "https://github.com/kijai/ComfyUI-DDColor", + "files": [ + "https://github.com/kijai/ComfyUI-DDColor" + ], + "install_type": "git-clone", + "description": "Node to use [a/DDColor](https://github.com/piddnad/DDColor) in ComfyUI." + }, + { + "author": "Kijai", + "title": "Animatediff MotionLoRA Trainer", + "reference": "https://github.com/kijai/ComfyUI-ADMotionDirector", + "files": [ + "https://github.com/kijai/ComfyUI-ADMotionDirector" + ], + "install_type": "git-clone", + "description": "This is a trainer for AnimateDiff MotionLoRAs, based on the implementation of MotionDirector by ExponentialML." + }, + { + "author": "hhhzzyang", + "title": "Comfyui-Lama", + "reference": "https://github.com/hhhzzyang/Comfyui_Lama", + "files": [ + "https://github.com/hhhzzyang/Comfyui_Lama" + ], + "install_type": "git-clone", + "description": "Nodes: LamaaModelLoad, LamaApply, YamlConfigLoader. a costumer node is realized to remove anything/inpainting anything from a picture by mask inpainting.[w/WARN:This extension includes the entire model, which can result in a very long initial installation time, and there may be some compatibility issues with older dependencies and ComfyUI.]" + }, + { + "author": "thedyze", + "title": "Save Image Extended for ComfyUI", + "reference": "https://github.com/thedyze/save-image-extended-comfyui", + "files": [ + "https://github.com/thedyze/save-image-extended-comfyui" + ], + "install_type": "git-clone", + "description": "Customize the information saved in file- and folder names. Use the values of sampler parameters as part of file or folder names. Save your positive & negative prompt as entries in a JSON (text) file, in each folder." + }, + { + "author": "SOELexicon", + "title": "ComfyUI-LexTools", + "reference": "https://github.com/SOELexicon/ComfyUI-LexTools", + "files": [ + "https://github.com/SOELexicon/ComfyUI-LexTools" + ], + "install_type": "git-clone", + "description": "ComfyUI-LexTools is a Python-based image processing and analysis toolkit that uses machine learning models for semantic image segmentation, image scoring, and image captioning." + }, + { + "author": "mikkel", + "title": "ComfyUI - Text Overlay Plugin", + "reference": "https://github.com/mikkel/ComfyUI-text-overlay", + "files": [ + "https://github.com/mikkel/ComfyUI-text-overlay" + ], + "install_type": "git-clone", + "description": "The ComfyUI Text Overlay Plugin provides functionalities for superimposing text on images. Users can select different font types, set text size, choose color, and adjust the text's position on the image." + }, + { + "author": "avatechai", + "title": "avatar-graph-comfyui", + "reference": "https://github.com/avatechai/avatar-graph-comfyui", + "files": [ + "https://github.com/avatechai/avatar-graph-comfyui" + ], + "install_type": "git-clone", + "description": "Include nodes for sam + bpy operation, that allows workflow creations for generative 2d character rig." + }, + { + "author": "TRI3D-LC", + "title": "tri3d-comfyui-nodes", + "reference": "https://github.com/TRI3D-LC/tri3d-comfyui-nodes", + "files": [ + "https://github.com/TRI3D-LC/tri3d-comfyui-nodes" + ], + "install_type": "git-clone", + "description": "Nodes: tri3d-extract-hand, tri3d-fuzzification, tri3d-position-hands, tri3d-atr-parse." 
+ }, + { + "author": "storyicon", + "title": "segment anything", + "reference": "https://github.com/storyicon/comfyui_segment_anything", + "files": [ + "https://github.com/storyicon/comfyui_segment_anything" + ], + "install_type": "git-clone", + "description": "Based on GroundingDino and SAM, use semantic strings to segment any element in an image. The comfyui version of sd-webui-segment-anything." + }, + { + "author": "a1lazydog", + "title": "ComfyUI-AudioScheduler", + "reference": "https://github.com/a1lazydog/ComfyUI-AudioScheduler", + "files": [ + "https://github.com/a1lazydog/ComfyUI-AudioScheduler" + ], + "install_type": "git-clone", + "description": "Load mp3 files and use the audio nodes to power animations and prompt scheduling. Use with FizzNodes." + }, + { + "author": "whatbirdisthat", + "title": "cyberdolphin", + "reference": "https://github.com/whatbirdisthat/cyberdolphin", + "files": [ + "https://github.com/whatbirdisthat/cyberdolphin" + ], + "install_type": "git-clone", + "description": "Cyberdolphin Suite of ComfyUI nodes for wiring up things." + }, + { + "author": "chrish-slingshot", + "title": "CrasH Utils", + "reference": "https://github.com/chrish-slingshot/CrasHUtils", + "files": [ + "https://github.com/chrish-slingshot/CrasHUtils" + ], + "install_type": "git-clone", + "description": "A mixture of effects and quality of life nodes. Nodes: ImageGlitcher (gives an image a cool glitchy effect), ColorStylizer (highlights a single color in an image), QueryLocalLLM (queries a local LLM API though oobabooga), SDXLReslution (resolution picker for the standard SDXL resolutions, the complete list), SDXLResolutionSplit (splits the SDXL resolution into width and height). " + }, + { + "author": "spinagon", + "title": "ComfyUI-seam-carving", + "reference": "https://github.com/spinagon/ComfyUI-seam-carving", + "files": [ + "https://github.com/spinagon/ComfyUI-seam-carving" + ], + "install_type": "git-clone", + "description": "Nodes: Image Resize (seam carving). Seam carving (image resize) for ComfyUI. Based on [a/https://github.com/li-plus/seam-carving](https://github.com/li-plus/seam-carving). With seam carving algorithm, the image could be intelligently resized while keeping the important contents undistorted. The carving process could be further guided, so that an object could be removed from the image without apparent artifacts." + }, + { + "author": "YMC", + "title": "ymc-node-suite-comfyui", + "reference": "https://github.com/YMC-GitHub/ymc-node-suite-comfyui", + "files": [ + "https://github.com/YMC-GitHub/ymc-node-suite-comfyui" + ], + "install_type": "git-clone", + "description": "ymc 's nodes for comfyui. This extension is composed of nodes that provide various utility features such as text, region, and I/O." + }, + { + "author": "chibiace", + "title": "ComfyUI-Chibi-Nodes", + "reference": "https://github.com/chibiace/ComfyUI-Chibi-Nodes", + "files": [ + "https://github.com/chibiace/ComfyUI-Chibi-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes:Loader, Prompts, ImageTool, Wildcards, LoadEmbedding, ConditionText, SaveImages, ..." + }, + { + "author": "DigitalIO", + "title": "ComfyUI-stable-wildcards", + "reference": "https://github.com/DigitalIO/ComfyUI-stable-wildcards", + "files": [ + "https://github.com/DigitalIO/ComfyUI-stable-wildcards" + ], + "install_type": "git-clone", + "description": "Wildcard implementation that can be reproduced with workflows." 
+ }, + { + "author": "THtianhao", + "title": "ComfyUI-Portrait-Maker", + "reference": "https://github.com/THtianhao/ComfyUI-Portrait-Maker", + "files": [ + "https://github.com/THtianhao/ComfyUI-Portrait-Maker" + ], + "install_type": "git-clone", + "description": "Nodes:RetainFace, FaceFusion, RatioMerge2Image, MaskMerge2Image, ReplaceBoxImg, ExpandMaskBox, FaceSkin, SkinRetouching, PortraitEnhancement, ..." + }, + { + "author": "THtianhao", + "title": "ComfyUI-FaceChain", + "reference": "https://github.com/THtianhao/ComfyUI-FaceChain", + "files": [ + "https://github.com/THtianhao/ComfyUI-FaceChain" + ], + "install_type": "git-clone", + "description": "The official ComfyUI version of facechain greatly improves the speed of reasoning and has great custom process controls." + }, + { + "author": "zer0TF", + "title": "Cute Comfy", + "reference": "https://github.com/zer0TF/cute-comfy", + "files": [ + "https://github.com/zer0TF/cute-comfy" + ], + "install_type": "git-clone", + "description": "Adds a configurable folder watcher that auto-converts Comfy metadata into a Civitai-friendly format for automatic resource tagging when you upload images. Oh, and it makes your UI awesome, too. 💜" + }, + { + "author": "chflame163", + "title": "ComfyUI_MSSpeech_TTS", + "reference": "https://github.com/chflame163/ComfyUI_MSSpeech_TTS", + "files": [ + "https://github.com/chflame163/ComfyUI_MSSpeech_TTS" + ], + "install_type": "git-clone", + "description": "A text-to-speech plugin used under ComfyUI. It utilizes the Microsoft Speech TTS interface to convert text content into MP3 format audio files." + }, + { + "author": "chflame163", + "title": "ComfyUI_WordCloud", + "reference": "https://github.com/chflame163/ComfyUI_WordCloud", + "files": [ + "https://github.com/chflame163/ComfyUI_WordCloud" + ], + "install_type": "git-clone", + "description": "Nodes:Word Cloud, Load Text File" + }, + { + "author": "drustan-hawk", + "title": "primitive-types", + "reference": "https://github.com/drustan-hawk/primitive-types", + "files": [ + "https://github.com/drustan-hawk/primitive-types" + ], + "install_type": "git-clone", + "description": "This repository contains typed primitives for ComfyUI. The motivation for these primitives is that the standard primitive node cannot be routed." + }, + { + "author": "shadowcz007", + "title": "comfyui-mixlab-nodes", + "reference": "https://github.com/shadowcz007/comfyui-mixlab-nodes", + "files": [ + "https://github.com/shadowcz007/comfyui-mixlab-nodes" + ], + "install_type": "git-clone", + "description": "3D, ScreenShareNode & FloatingVideoNode, SpeechRecognition & SpeechSynthesis, GPT, LoadImagesFromLocal, Layers, Other Nodes, ..." + }, + { + "author": "shadowcz007", + "title": "comfyui-ultralytics-yolo", + "reference": "https://github.com/shadowcz007/comfyui-ultralytics-yolo", + "files": [ + "https://github.com/shadowcz007/comfyui-ultralytics-yolo" + ], + "install_type": "git-clone", + "description": "Nodes:Detect By Label." + }, + { + "author": "shadowcz007", + "title": "Consistency Decoder", + "reference": "https://github.com/shadowcz007/comfyui-consistency-decoder", + "files": [ + "https://github.com/shadowcz007/comfyui-consistency-decoder" + ], + "install_type": "git-clone", + "description": "[a/openai Consistency Decoder](https://github.com/openai/consistencydecoder). After downloading the [a/OpenAI VAE model](https://openaipublic.azureedge.net/diff-vae/c9cebd3132dd9c42936d803e33424145a748843c8f716c0814838bdc8a2fe7cb/decoder.pt), place it in the `model/vae` directory for use." 
+ }, + { + "author": "ostris", + "title": "Ostris Nodes ComfyUI", + "reference": "https://github.com/ostris/ostris_nodes_comfyui", + "files": [ + "https://github.com/ostris/ostris_nodes_comfyui" + ], + "install_type": "git-clone", + "nodename_pattern": "- Ostris$", + "description": "This is a collection of custom nodes for ComfyUI that I made for some QOL. I will be adding much more advanced ones in the future once I get more familiar with the API." + }, + { + "author": "0xbitches", + "title": "Latent Consistency Model for ComfyUI", + "reference": "https://github.com/0xbitches/ComfyUI-LCM", + "files": [ + "https://github.com/0xbitches/ComfyUI-LCM" + ], + "install_type": "git-clone", + "description": "This custom node implements a Latent Consistency Model sampler in ComfyUI. (LCM)" + }, + { + "author": "aszc-dev", + "title": "Core ML Suite for ComfyUI", + "reference": "https://github.com/aszc-dev/ComfyUI-CoreMLSuite", + "files": [ + "https://github.com/aszc-dev/ComfyUI-CoreMLSuite" + ], + "install_type": "git-clone", + "description": "This extension contains a set of custom nodes for ComfyUI that allow you to use Core ML models in your ComfyUI workflows. The models can be obtained here, or you can convert your own models using coremltools. The main motivation behind using Core ML models in ComfyUI is to allow you to utilize the ANE (Apple Neural Engine) on Apple Silicon (M1/M2) machines to improve performance." + }, + { + "author": "taabata", + "title": "Syrian Falcon Nodes", + "reference": "https://github.com/taabata/Comfy_Syrian_Falcon_Nodes", + "files": [ + "https://github.com/taabata/Comfy_Syrian_Falcon_Nodes/raw/main/SyrianFalconNodes.py" + ], + "install_type": "copy", + "description": "Nodes:Prompt editing, Word as Image" + }, + { + "author": "taabata", + "title": "LCM_Inpaint-Outpaint_Comfy", + "reference": "https://github.com/taabata/LCM_Inpaint-Outpaint_Comfy", + "files": [ + "https://github.com/taabata/LCM_Inpaint-Outpaint_Comfy" + ], + "install_type": "git-clone", + "description": "ComfyUI custom nodes for inpainting/outpainting using the new latent consistency model (LCM)" + }, + { + "author": "noxinias", + "title": "ComfyUI_NoxinNodes", + "reference": "https://github.com/noxinias/ComfyUI_NoxinNodes", + "files": [ + "https://github.com/noxinias/ComfyUI_NoxinNodes" + ], + "install_type": "git-clone", + "description": "Nodes: Noxin Complete Chime, Noxin Scaled Resolutions, Load from Noxin Prompt Library, Save to Noxin Prompt Library" + }, + { + "author": "apesplat", + "title": "ezXY scripts and nodes", + "reference": "https://github.com/GMapeSplat/ComfyUI_ezXY", + "files": [ + "https://github.com/GMapeSplat/ComfyUI_ezXY" + ], + "install_type": "git-clone", + "description": "Extensions/Patches: Enables linking float and integer inputs and ouputs. Values are automatically cast to the correct type and clamped to the correct range. Works with both builtin and custom nodes.[w/NOTE: This repo patches ComfyUI's validate_inputs and map_node_over_list functions while running. May break depending on your version of ComfyUI. Can be deactivated in config.yaml.]Nodes: A collection of nodes for facilitating the generation of XY plots. Capable of plotting changes over most primitive values." + }, + { + "author": "kinfolk0117", + "title": "SimpleTiles", + "reference": "https://github.com/kinfolk0117/ComfyUI_SimpleTiles", + "files": [ + "https://github.com/kinfolk0117/ComfyUI_SimpleTiles" + ], + "install_type": "git-clone", + "description": "Nodes:TileSplit, TileMerge." 
+ }, + { + "author": "kinfolk0117", + "title": "ComfyUI_GradientDeepShrink", + "reference": "https://github.com/kinfolk0117/ComfyUI_GradientDeepShrink", + "files": [ + "https://github.com/kinfolk0117/ComfyUI_GradientDeepShrink" + ], + "install_type": "git-clone", + "description": "Nodes:GradientPatchModelAddDownscale (Kohya Deep Shrink)." + }, + { + "author": "kinfolk0117", + "title": "TiledIPAdapter", + "reference": "https://github.com/kinfolk0117/ComfyUI_TiledIPAdapter", + "files": [ + "https://github.com/kinfolk0117/ComfyUI_TiledIPAdapter" + ], + "install_type": "git-clone", + "description": "Proof of concent on how to use IPAdapter to control tiled upscaling. NOTE: You need to have 'ComfyUI_IPAdapter_plus' installed." + }, + { + "author": "kinfolk0117", + "title": "ComfyUI_Pilgram", + "reference": "https://github.com/kinfolk0117/ComfyUI_Pilgram", + "files": [ + "https://github.com/kinfolk0117/ComfyUI_Pilgram" + ], + "install_type": "git-clone", + "description": "Use [a/Pilgram2](https://github.com/mgineer85/pilgram2) filters in ComfyUI" + }, + { + "author": "Fictiverse", + "title": "ComfyUI Fictiverse Nodes", + "reference": "https://github.com/Fictiverse/ComfyUI_Fictiverse", + "files": [ + "https://github.com/Fictiverse/ComfyUI_Fictiverse" + ], + "install_type": "git-clone", + "description": "Nodes:Color correction." + }, + { + "author": "idrirap", + "title": "ComfyUI-Lora-Auto-Trigger-Words", + "reference": "https://github.com/idrirap/ComfyUI-Lora-Auto-Trigger-Words", + "files": [ + "https://github.com/idrirap/ComfyUI-Lora-Auto-Trigger-Words" + ], + "install_type": "git-clone", + "description": "This project is a fork of [a/https://github.com/Extraltodeus/LoadLoraWithTags](https://github.com/Extraltodeus/LoadLoraWithTags) The aim of these custom nodes is to get an easy access to the tags used to trigger a lora." + }, + { + "author": "aianimation55", + "title": "Comfy UI FatLabels", + "reference": "https://github.com/aianimation55/ComfyUI-FatLabels", + "files": [ + "https://github.com/aianimation55/ComfyUI-FatLabels" + ], + "install_type": "git-clone", + "description": "It's a super simple custom node for Comfy UI, to generate text, with a font size option. Useful for bigger labelling of nodes, helpful for wider screen captures or tutorials. Plus you can of course use the text within your generations." + }, + { + "author": "noEmbryo", + "title": "noEmbryo nodes", + "reference": "https://github.com/noembryo/ComfyUI-noEmbryo", + "files": [ + "https://github.com/noembryo/ComfyUI-noEmbryo" + ], + "install_type": "git-clone", + "description": "PromptTermList (1-6): are some nodes that help with the creation of Prompts inside ComfyUI. Resolution Scale outputs image dimensions using a scale factor. Regex Text Chopper outputs the chopped parts of a text using RegEx." + }, + { + "author": "mikkel", + "title": "ComfyUI - Mask Bounding Box", + "reference": "https://github.com/mikkel/comfyui-mask-boundingbox", + "files": [ + "https://github.com/mikkel/comfyui-mask-boundingbox" + ], + "install_type": "git-clone", + "description": "The ComfyUI Mask Bounding Box Plugin provides functionalities for selecting a specific size mask from an image. Can be combined with ClipSEG to replace any aspect of an SDXL image with an SD1.5 output." 
+ }, + { + "author": "ParmanBabra", + "title": "ComfyUI-Malefish-Custom-Scripts", + "reference": "https://github.com/ParmanBabra/ComfyUI-Malefish-Custom-Scripts", + "files": [ + "https://github.com/ParmanBabra/ComfyUI-Malefish-Custom-Scripts" + ], + "install_type": "git-clone", + "description": "Nodes:Multi Lora Loader, Random (Prompt), Combine (Prompt), CSV Prompts Loader" + }, + { + "author": "IAmMatan.com", + "title": "ComfyUI Serving toolkit", + "reference": "https://github.com/matan1905/ComfyUI-Serving-Toolkit", + "files": [ + "https://github.com/matan1905/ComfyUI-Serving-Toolkit" + ], + "install_type": "git-clone", + "description": "This extension adds nodes that allow you to easily serve your workflow (for example using a discord bot) " + }, + { + "author": "PCMonsterx", + "title": "ComfyUI-CSV-Loader", + "reference": "https://github.com/PCMonsterx/ComfyUI-CSV-Loader", + "files": [ + "https://github.com/PCMonsterx/ComfyUI-CSV-Loader" + ], + "install_type": "git-clone", + "description": "CSV Loader for prompt building within ComfyUI interface. Allows access to positive/negative prompts associated with a name. Selections are being pulled from CSV files." + }, + { + "author": "Trung0246", + "title": "ComfyUI-0246", + "reference": "https://github.com/Trung0246/ComfyUI-0246", + "files": [ + "https://github.com/Trung0246/ComfyUI-0246" + ], + "install_type": "git-clone", + "description": "Random nodes for ComfyUI I made to solve my struggle with ComfyUI (ex: pipe, process). Have varying quality." + }, + { + "author": "fexli", + "title": "fexli-util-node-comfyui", + "reference": "https://github.com/fexli/fexli-util-node-comfyui", + "files": [ + "https://github.com/fexli/fexli-util-node-comfyui" + ], + "install_type": "git-clone", + "description": "Nodes:FEImagePadForOutpaint, FEColorOut, FEColor2Image, FERandomizedColor2Image" + }, + { + "author": "AbyssYuan0", + "title": "ComfyUI_BadgerTools", + "reference": "https://github.com/AbyssYuan0/ComfyUI_BadgerTools", + "files": [ + "https://github.com/AbyssYuan0/ComfyUI_BadgerTools" + ], + "install_type": "git-clone", + "description": "Nodes:ImageOverlap-badger, FloatToInt-badger, IntToString-badger, FloatToString-badger, ImageNormalization-badger, ImageScaleToSide-badger, NovelToFizz-badger." + }, + { + "author": "palant", + "title": "Image Resize for ComfyUI", + "reference": "https://github.com/palant/image-resize-comfyui", + "files": [ + "https://github.com/palant/image-resize-comfyui" + ], + "install_type": "git-clone", + "description": "This custom node provides various tools for resizing images. The goal is resizing without distorting proportions, yet without having to perform any calculations with the size of the original image. If a mask is present, it is resized and modified along with the image." + }, + { + "author": "palant", + "title": "Integrated Nodes for ComfyUI", + "reference": "https://github.com/palant/integrated-nodes-comfyui", + "files": [ + "https://github.com/palant/integrated-nodes-comfyui" + ], + "install_type": "git-clone", + "description": "This tool will turn entire workflows or parts of them into single integrated nodes. In a way, it is similar to the Node Templates functionality but hides the inner structure. This is useful if all you want is to reuse and quickly configure a bunch of nodes without caring how they are interconnected." 
+ }, + { + "author": "palant", + "title": "Extended Save Image for ComfyUI", + "reference": "https://github.com/palant/extended-saveimage-comfyui", + "files": [ + "https://github.com/palant/extended-saveimage-comfyui" + ], + "install_type": "git-clone", + "description": "This custom node is largely identical to the usual Save Image but allows saving images also in JPEG and WEBP formats, the latter with both lossless and lossy compression. Metadata is embedded in the images as usual, and the resulting images can be used to load a workflow." + }, + { + "author": "whmc76", + "title": "ComfyUI-Openpose-Editor-Plus", + "reference": "https://github.com/whmc76/ComfyUI-Openpose-Editor-Plus", + "files": [ + "https://github.com/whmc76/ComfyUI-Openpose-Editor-Plus" + ], + "install_type": "git-clone", + "description": "Nodes:Openpose Editor Plus" + }, + { + "author": "martijnat", + "title": "comfyui-previewlatent", + "reference": "https://github.com/martijnat/comfyui-previewlatent", + "files": [ + "https://github.com/martijnat/comfyui-previewlatent" + ], + "install_type": "git-clone", + "description": "a ComfyUI plugin for previewing latents without vae decoding. Useful for showing intermediate results and can be used a faster 'preview image' if you don't wan't to use vae decode." + }, + { + "author": "banodoco", + "title": "Steerable Motion", + "reference": "https://github.com/banodoco/steerable-motion", + "files": [ + "https://github.com/banodoco/steerable-motion" + ], + "install_type": "git-clone", + "description": "Steerable Motion is a ComfyUI node for batch creative interpolation. Our goal is to feature the best methods for steering motion with images as video models evolve." + }, + { + "author": "gemell1", + "title": "ComfyUI_GMIC", + "reference": "https://github.com/gemell1/ComfyUI_GMIC", + "files": [ + "https://github.com/gemell1/ComfyUI_GMIC" + ], + "install_type": "git-clone", + "description": "Nodes:GMIC Image Processing." + }, + { + "author": "LonicaMewinsky", + "title": "ComfyBreakAnim", + "reference": "https://github.com/LonicaMewinsky/ComfyUI-MakeFrame", + "files": [ + "https://github.com/LonicaMewinsky/ComfyUI-MakeFrame" + ], + "install_type": "git-clone", + "description": "Nodes:BreakFrames, GetKeyFrames, MakeGrid." + }, + { + "author": "TheBarret", + "title": "ZSuite", + "reference": "https://github.com/TheBarret/ZSuite", + "files": [ + "https://github.com/TheBarret/ZSuite" + ], + "install_type": "git-clone", + "description": "Nodes:Prompter, RF Noise, SeedMod." + }, + { + "author": "romeobuilderotti", + "title": "ComfyUI PNG Metadata", + "reference": "https://github.com/romeobuilderotti/ComfyUI-PNG-Metadata", + "files": [ + "https://github.com/romeobuilderotti/ComfyUI-PNG-Metadata" + ], + "install_type": "git-clone", + "description": "Add custom Metadata fields to your saved PNG files." + }, + { + "author": "ka-puna", + "title": "comfyui-yanc", + "reference": "https://github.com/ka-puna/comfyui-yanc", + "files": [ + "https://github.com/ka-puna/comfyui-yanc" + ], + "install_type": "git-clone", + "description": "NOTE: Concatenate Strings, Format Datetime String, Integer Caster, Multiline String, Truncate String. Yet Another Node Collection, a repository of simple nodes for ComfyUI. This repository eases the addition or removal of custom nodes to itself." 
+ }, + { + "author": "amorano", + "title": "Jovimetrix Composition Nodes", + "reference": "https://github.com/Amorano/Jovimetrix", + "files": [ + "https://github.com/Amorano/Jovimetrix" + ], + "nodename_pattern": " \\(jov\\)$", + "install_type": "git-clone", + "description": "Compose like Substance Designer. Webcams, Media Streams (in/out), Tick animation, Color correction, Geometry manipulation, Pixel shader, Polygonal shape generator, Remap images gometry and color, Heavily inspired by WAS and MTB Node Suites." + }, + { + "author": "Umikaze-job", + "title": "select_folder_path_easy", + "reference": "https://github.com/Umikaze-job/select_folder_path_easy", + "files": [ + "https://github.com/Umikaze-job/select_folder_path_easy" + ], + "install_type": "git-clone", + "description": "This extension simply connects the nodes and specifies the output path of the generated images to a manageable path." + }, + { + "author": "Niutonian", + "title": "ComfyUi-NoodleWebcam", + "reference": "https://github.com/Niutonian/ComfyUi-NoodleWebcam", + "files": [ + "https://github.com/Niutonian/ComfyUi-NoodleWebcam" + ], + "install_type": "git-clone", + "description": "Nodes:Noodle webcam is a node that records frames and send them to your favourite node." + }, + { + "author": "Feidorian", + "title": "feidorian-ComfyNodes", + "reference": "https://github.com/Feidorian/feidorian-ComfyNodes", + "nodename_pattern": "^Feidorian_", + "files": [ + "https://github.com/Feidorian/feidorian-ComfyNodes" + ], + "install_type": "git-clone", + "description": "This extension provides various custom nodes. literals, loaders, logic, output, switches" + }, + { + "author": "wutipong", + "title": "ComfyUI-TextUtils", + "reference": "https://github.com/wutipong/ComfyUI-TextUtils", + "files": [ + "https://github.com/wutipong/ComfyUI-TextUtils" + ], + "install_type": "git-clone", + "description": "Nodes:Create N-Token String" + }, + { + "author": "natto-maki", + "title": "ComfyUI-NegiTools", + "reference": "https://github.com/natto-maki/ComfyUI-NegiTools", + "files": [ + "https://github.com/natto-maki/ComfyUI-NegiTools" + ], + "install_type": "git-clone", + "description": "Nodes:OpenAI DALLe3, OpenAI Translate to English, String Function, Seed Generator" + }, + { + "author": "LonicaMewinsky", + "title": "ComfyUI-RawSaver", + "reference": "https://github.com/LonicaMewinsky/ComfyUI-RawSaver", + "files": [ + "https://github.com/LonicaMewinsky/ComfyUI-RawSaver" + ], + "install_type": "git-clone", + "description": "Nodes:SaveTifImage. ComfyUI custom node for purpose of saving image as uint16 tif file." + }, + { + "author": "jojkaart", + "title": "ComfyUI-sampler-lcm-alternative", + "reference": "https://github.com/jojkaart/ComfyUI-sampler-lcm-alternative", + "files": [ + "https://github.com/jojkaart/ComfyUI-sampler-lcm-alternative" + ], + "install_type": "git-clone", + "description": "Nodes:LCMScheduler, SamplerLCMAlternative, SamplerLCMCycle. ComfyUI Custom Sampler nodes that add a new improved LCM sampler functions" + }, + { + "author": "GTSuya-Studio", + "title": "ComfyUI-GTSuya-Nodes", + "reference": "https://github.com/GTSuya-Studio/ComfyUI-Gtsuya-Nodes", + "files": [ + "https://github.com/GTSuya-Studio/ComfyUI-Gtsuya-Nodes" + ], + "install_type": "git-clone", + "description": "ComfyUI-GTSuya-Nodes is a ComfyUI extension designed to add several wildcards supports into ComfyUI. Wildcards allow you to use __name__ syntax in your prompt to get a random line from a file named name.txt in a wildcards directory." 
+ }, + { + "author": "oyvindg", + "title": "ComfyUI-TrollSuite", + "reference": "https://github.com/oyvindg/ComfyUI-TrollSuite", + "files": [ + "https://github.com/oyvindg/ComfyUI-TrollSuite" + ], + "install_type": "git-clone", + "description": "Nodes: BinaryImageMask, ImagePadding, LoadLastCreatedImage, RandomMask, TransparentImage." + }, + { + "author": "drago87", + "title": "ComfyUI_Dragos_Nodes", + "reference": "https://github.com/drago87/ComfyUI_Dragos_Nodes", + "files": [ + "https://github.com/drago87/ComfyUI_Dragos_Nodes" + ], + "install_type": "git-clone", + "description": "Nodes:File Padding, Image Info, VAE Loader With Name" + }, + { + "author": "ansonkao", + "title": "comfyui-geometry", + "reference": "https://github.com/ansonkao/comfyui-geometry", + "files": [ + "https://github.com/ansonkao/comfyui-geometry" + ], + "install_type": "git-clone", + "description": "Nodes: Mask to Centroid, Mask to Eigenvector. A small collection of custom nodes for use with ComfyUI, for geometry calculations" + }, + { + "author": "bronkula", + "title": "comfyui-fitsize", + "reference": "https://github.com/bronkula/comfyui-fitsize", + "files": [ + "https://github.com/bronkula/comfyui-fitsize" + ], + "install_type": "git-clone", + "description": "Nodes:Fit Size From Int/Image/Resize, Load Image And Resize To Fit, Pick Image From Batch/List, Crop Image Into Even Pieces, Image Region To Mask... A simple set of nodes for making an image fit within a bounding box" + }, + { + "author": "toyxyz", + "title": "ComfyUI_toyxyz_test_nodes", + "reference": "https://github.com/toyxyz/ComfyUI_toyxyz_test_nodes", + "files": [ + "https://github.com/toyxyz/ComfyUI_toyxyz_test_nodes" + ], + "install_type": "git-clone", + "description": "This node was created to send a webcam to ComfyUI in real time. This node is recommended for use with LCM." + }, + { + "author": "thecooltechguy", + "title": "ComfyUI Stable Video Diffusion", + "reference": "https://github.com/thecooltechguy/ComfyUI-Stable-Video-Diffusion", + "files": [ + "https://github.com/thecooltechguy/ComfyUI-Stable-Video-Diffusion" + ], + "install_type": "git-clone", + "description": "Easily use Stable Video Diffusion inside ComfyUI!" + }, + { + "author": "Danand", + "title": "ComfyUI-ComfyCouple", + "reference": "https://github.com/Danand/ComfyUI-ComfyCouple", + "files": [ + "https://github.com/Danand/ComfyUI-ComfyCouple" + ], + "install_type": "git-clone", + "description": " If you want to draw two different characters together without blending their features, so you could try to check out this custom node." + }, + { + "author": "42lux", + "title": "ComfyUI-safety-checker", + "reference": "https://github.com/42lux/ComfyUI-safety-checker", + "files": [ + "https://github.com/42lux/ComfyUI-safety-checker" + ], + "install_type": "git-clone", + "description": "A NSFW/Safety Checker Node for ComfyUI." 
+ },
+ {
+ "author": "sergekatzmann",
+ "title": "ComfyUI_Nimbus-Pack",
+ "reference": "https://github.com/sergekatzmann/ComfyUI_Nimbus-Pack",
+ "files": [
+ "https://github.com/sergekatzmann/ComfyUI_Nimbus-Pack"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Image Square Adapter Node, Image Resize And Crop Node"
+ },
+ {
+ "author": "komojini",
+ "title": "ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes",
+ "reference": "https://github.com/komojini/ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes",
+ "files": [
+ "https://github.com/komojini/ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:XL DreamBooth LoRA, S3 Bucket LoRA"
+ },
+ {
+ "author": "komojini",
+ "title": "komojini-comfyui-nodes",
+ "reference": "https://github.com/komojini/komojini-comfyui-nodes",
+ "files": [
+ "https://github.com/komojini/komojini-comfyui-nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:YouTube Video Loader. Custom ComfyUI Nodes for video generation"
+ },
+ {
+ "author": "ZHO-ZHO-ZHO",
+ "title": "ComfyUI-Text_Image-Composite [WIP]",
+ "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite",
+ "files": [
+ "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Text_Image_Zho, Text_Image_Multiline_Zho, RGB_Image_Zho, AlphaChanelAddByMask, ImageComposite_Zho, ..."
+ },
+ {
+ "author": "ZHO-ZHO-ZHO",
+ "title": "ComfyUI-Gemini",
+ "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini",
+ "files": [
+ "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini"
+ ],
+ "install_type": "git-clone",
+ "description": "Using Gemini-pro & Gemini-pro-vision in ComfyUI."
+ },
+ {
+ "author": "ZHO-ZHO-ZHO",
+ "title": "comfyui-portrait-master-zh-cn",
+ "reference": "https://github.com/ZHO-ZHO-ZHO/comfyui-portrait-master-zh-cn",
+ "files": [
+ "https://github.com/ZHO-ZHO-ZHO/comfyui-portrait-master-zh-cn"
+ ],
+ "install_type": "git-clone",
+ "description": "ComfyUI Portrait Master, Simplified Chinese version."
+ },
+ {
+ "author": "ZHO-ZHO-ZHO",
+ "title": "ComfyUI-Q-Align",
+ "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Q-Align",
+ "files": [
+ "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Q-Align"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Q-Align Scoring. 
Implementation of [a/Q-Align](https://arxiv.org/abs/2312.17090) for ComfyUI" + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI-InstantID", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID" + ], + "install_type": "git-clone", + "description": "Unofficial implementation of [a/InstantID](https://github.com/InstantID/InstantID) for ComfyUI" + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI PhotoMaker (ZHO)", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-PhotoMaker-ZHO", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-PhotoMaker-ZHO" + ], + "install_type": "git-clone", + "description": "Unofficial implementation of [a/PhotoMaker](https://github.com/TencentARC/PhotoMaker) for ComfyUI" + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI-Qwen-VL-API", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Qwen-VL-API", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Qwen-VL-API" + ], + "install_type": "git-clone", + "description": "QWen-VL-Plus & QWen-VL-Max in ComfyUI" + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI-SVD-ZHO (WIP)", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SVD-ZHO", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SVD-ZHO" + ], + "install_type": "git-clone", + "description": "My Workflows + Auxiliary nodes for Stable Video Diffusion (SVD)" + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI SegMoE", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SegMoE", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SegMoE" + ], + "install_type": "git-clone", + "description": "Unofficial implementation of [a/SegMoE: Segmind Mixture of Diffusion Experts](https://github.com/segmind/segmoe) for ComfyUI" + }, + { + "author": "kenjiqq", + "title": "qq-nodes-comfyui", + "reference": "https://github.com/kenjiqq/qq-nodes-comfyui", + "files": [ + "https://github.com/kenjiqq/qq-nodes-comfyui" + ], + "install_type": "git-clone", + "description": "Nodes:Any List, Image Accumulator Start, Image Accumulator End, Load Lines From Text File, XY Grid Helper, Slice List, Axis To String/Int/Float/Model, ..." + }, + { + "author": "80sVectorz", + "title": "ComfyUI-Static-Primitives", + "reference": "https://github.com/80sVectorz/ComfyUI-Static-Primitives", + "files": [ + "https://github.com/80sVectorz/ComfyUI-Static-Primitives" + ], + "install_type": "git-clone", + "description": "Adds Static Primitives to ComfyUI. 
Mostly to work with reroute nodes"
+ },
+ {
+ "author": "AbdullahAlfaraj",
+ "title": "Comfy-Photoshop-SD",
+ "reference": "https://github.com/AbdullahAlfaraj/Comfy-Photoshop-SD",
+ "files": [
+ "https://github.com/AbdullahAlfaraj/Comfy-Photoshop-SD"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: load Image with metadata, get config data, load image from base64 string, Load Loras From Prompt, Generate Latent Noise, Combine Two Latents Into Batch, General Purpose Controlnet Unit, ControlNet Script, Content Mask Latent, Auto-Photoshop-SD Seed, Expand and Blur the Mask"
+ },
+ {
+ "author": "zhuanqianfish",
+ "title": "EasyCaptureNode for ComfyUI",
+ "reference": "https://github.com/zhuanqianfish/ComfyUI-EasyNode",
+ "files": [
+ "https://github.com/zhuanqianfish/ComfyUI-EasyNode"
+ ],
+ "install_type": "git-clone",
+ "description": "Capture window content from other programs; an easy way to combine with LCM for real-time painting."
+ },
+ {
+ "author": "discopixel-studio",
+ "title": "ComfyUI Discopixel Nodes",
+ "reference": "https://github.com/discopixel-studio/comfyui-discopixel",
+ "files": [
+ "https://github.com/discopixel-studio/comfyui-discopixel"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:TransformTemplateOntoFaceMask, ... A small collection of custom nodes for use with ComfyUI, by Discopixel"
+ },
+ {
+ "author": "zcfrank1st",
+ "title": "ComfyUI Yolov8",
+ "reference": "https://github.com/zcfrank1st/Comfyui-Yolov8",
+ "files": [
+ "https://github.com/zcfrank1st/Comfyui-Yolov8"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Yolov8Detection, Yolov8Segmentation. Dead-simple yolov8 ComfyUI plugin"
+ },
+ {
+ "author": "SoftMeng",
+ "title": "ComfyUI_Mexx_Styler",
+ "reference": "https://github.com/SoftMeng/ComfyUI_Mexx_Styler",
+ "files": [
+ "https://github.com/SoftMeng/ComfyUI_Mexx_Styler"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: ComfyUI Mexx Styler, ComfyUI Mexx Styler Advanced"
+ },
+ {
+ "author": "SoftMeng",
+ "title": "ComfyUI_Mexx_Poster",
+ "reference": "https://github.com/SoftMeng/ComfyUI_Mexx_Poster",
+ "files": [
+ "https://github.com/SoftMeng/ComfyUI_Mexx_Poster"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: ComfyUI_Mexx_Poster"
+ },
+ {
+ "author": "wmatson",
+ "title": "easy-comfy-nodes",
+ "reference": "https://github.com/wmatson/easy-comfy-nodes",
+ "files": [
+ "https://github.com/wmatson/easy-comfy-nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: HTTP POST, Empty Dict, Assoc Str, Assoc Dict, Assoc Img, Load Img From URL (EZ), Load Img Batch From URLs (EZ), Video Combine + upload (EZ), ..."
+ },
+ {
+ "author": "DrJKL",
+ "title": "ComfyUI-Anchors",
+ "reference": "https://github.com/DrJKL/ComfyUI-Anchors",
+ "files": [
+ "https://github.com/DrJKL/ComfyUI-Anchors"
+ ],
+ "install_type": "git-clone",
+ "description": "A ComfyUI extension to add spatial anchors/waypoints to better navigate large workflows."
+ },
+ {
+ "author": "vanillacode314",
+ "title": "Simple Wildcard",
+ "reference": "https://github.com/vanillacode314/SimpleWildcardsComfyUI",
+ "files": ["https://github.com/vanillacode314/SimpleWildcardsComfyUI"],
+ "install_type": "git-clone",
+ "pip": ["pipe"],
+ "description": "A simple wildcard node for ComfyUI. Can also be used as a style prompt node."
+ },
+ {
+ "author": "WebDev9000",
+ "title": "WebDev9000-Nodes",
+ "reference": "https://github.com/WebDev9000/WebDev9000-Nodes",
+ "files": [
+ "https://github.com/WebDev9000/WebDev9000-Nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Ignore Braces, Settings Switch."
+ },
+ {
+ "author": "Scholar01",
+ "title": "SComfyUI-Keyframe",
+ "reference": "https://github.com/Scholar01/ComfyUI-Keyframe",
+ "files": [
+ "https://github.com/Scholar01/ComfyUI-Keyframe"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Keyframe Part, Keyframe Interpolation Part, Keyframe Apply."
+ },
+ {
+ "author": "Haoming02",
+ "title": "ComfyUI Diffusion Color Grading",
+ "reference": "https://github.com/Haoming02/comfyui-diffusion-cg",
+ "files": [
+ "https://github.com/Haoming02/comfyui-diffusion-cg"
+ ],
+ "install_type": "git-clone",
+ "description": "This is the ComfyUI port of the joint research between me and TimothyAlexisVass. For more information, check out the original [a/Extension](https://github.com/Haoming02/sd-webui-diffusion-cg) for Automatic1111."
+ },
+ {
+ "author": "Haoming02",
+ "title": "comfyui-prompt-format",
+ "reference": "https://github.com/Haoming02/comfyui-prompt-format",
+ "files": [
+ "https://github.com/Haoming02/comfyui-prompt-format"
+ ],
+ "install_type": "git-clone",
+ "description": "This is an Extension for ComfyUI, which helps with formatting texts."
+ },
+ {
+ "author": "Haoming02",
+ "title": "ComfyUI Clear Screen",
+ "reference": "https://github.com/Haoming02/comfyui-clear-screen",
+ "files": [
+ "https://github.com/Haoming02/comfyui-clear-screen"
+ ],
+ "install_type": "git-clone",
+ "description": "This is an Extension for ComfyUI, which adds a button, CLS, to clear the console window."
+ },
+ {
+ "author": "Haoming02",
+ "title": "ComfyUI Menu Anchor",
+ "reference": "https://github.com/Haoming02/comfyui-menu-anchor",
+ "files": [
+ "https://github.com/Haoming02/comfyui-menu-anchor"
+ ],
+ "install_type": "git-clone",
+ "description": "This is an Extension for ComfyUI, which moves the menu to the specified corner on startup."
+ },
+ {
+ "author": "Haoming02",
+ "title": "ComfyUI Tab Handler",
+ "reference": "https://github.com/Haoming02/comfyui-tab-handler",
+ "files": [
+ "https://github.com/Haoming02/comfyui-tab-handler"
+ ],
+ "install_type": "git-clone",
+ "description": "This is an Extension for ComfyUI, which uses the Tab key to switch focus between text fields."
+ },
+ {
+ "author": "Haoming02",
+ "title": "ComfyUI Floodgate",
+ "reference": "https://github.com/Haoming02/comfyui-floodgate",
+ "files": [
+ "https://github.com/Haoming02/comfyui-floodgate"
+ ],
+ "install_type": "git-clone",
+ "description": "This is an Extension for ComfyUI, which allows you to control the logic flow with just one click!"
+ },
+ {
+ "author": "bedovyy",
+ "title": "ComfyUI_NAIDGenerator",
+ "reference": "https://github.com/bedovyy/ComfyUI_NAIDGenerator",
+ "files": [
+ "https://github.com/bedovyy/ComfyUI_NAIDGenerator"
+ ],
+ "install_type": "git-clone",
+ "description": "This extension helps generate images through NAI."
+ },
+ {
+ "author": "Off-Live",
+ "title": "ComfyUI-off-suite",
+ "reference": "https://github.com/Off-Live/ComfyUI-off-suite",
+ "files": [
+ "https://github.com/Off-Live/ComfyUI-off-suite"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Image Crop Fit, OFF SEGS to Image, Crop Center with SEGS, Watermarking, GW Number Formatting Node."
+ },
+ {
+ "author": "ningxiaoxiao",
+ "title": "comfyui-NDI",
+ "reference": "https://github.com/ningxiaoxiao/comfyui-NDI",
+ "files": [
+ "https://github.com/ningxiaoxiao/comfyui-NDI"
+ ],
+ "pip": ["ndi-python"],
+ "install_type": "git-clone",
+ "description": "Real-time input/output node for ComfyUI via NDI. Leveraging the powerful linking capabilities of NDI, you can access NDI video stream frames and send images generated by the model to NDI video streams."
+ },
+ {
+ "author": "subtleGradient",
+ "title": "Touchpad two-finger gesture support for macOS",
+ "reference": "https://github.com/subtleGradient/TinkerBot-tech-for-ComfyUI-Touchpad",
+ "files": [
+ "https://github.com/subtleGradient/TinkerBot-tech-for-ComfyUI-Touchpad"
+ ],
+ "install_type": "git-clone",
+ "description": "Two-finger scrolling (vertical and horizontal) to pan the canvas. Two-finger pinch to zoom in and out. Command-scroll up and down to zoom in and out. Fixes [comfyanonymous/ComfyUI#2059](https://github.com/comfyanonymous/ComfyUI/issues/2059)."
+ },
+ {
+ "author": "zcfrank1st",
+ "title": "comfyui_visual_anagram",
+ "reference": "https://github.com/zcfrank1st/comfyui_visual_anagrams",
+ "files": [
+ "https://github.com/zcfrank1st/comfyui_visual_anagrams"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:visual_anagrams_sample, visual_anagrams_animate"
+ },
+ {
+ "author": "Electrofried",
+ "title": "OpenAINode",
+ "reference": "https://github.com/Electrofried/ComfyUI-OpenAINode",
+ "files": [
+ "https://github.com/Electrofried/ComfyUI-OpenAINode"
+ ],
+ "install_type": "git-clone",
+ "description": "A simple node for hooking into OpenAI API-based servers via ComfyUI."
+ },
+ {
+ "author": "AustinMroz",
+ "title": "SpliceTools",
+ "reference": "https://github.com/AustinMroz/ComfyUI-SpliceTools",
+ "files": [
+ "https://github.com/AustinMroz/ComfyUI-SpliceTools"
+ ],
+ "install_type": "git-clone",
+ "description": "Experimental utility nodes with a focus on manipulation of noised latents"
+ },
+ {
+ "author": "11cafe",
+ "title": "ComfyUI Workspace Manager - Comfyspace",
+ "reference": "https://github.com/11cafe/comfyui-workspace-manager",
+ "files": [
+ "https://github.com/11cafe/comfyui-workspace-manager"
+ ],
+ "install_type": "git-clone",
+ "description": "A ComfyUI custom node for project management to centralize the management of all your workflows in one place. Seamlessly switch between workflows, create and update them within a single workspace, like Google Docs."
+ },
+ {
+ "author": "knuknX",
+ "title": "ComfyUI-Image-Tools",
+ "reference": "https://github.com/knuknX/ComfyUI-Image-Tools",
+ "files": [
+ "https://github.com/knuknX/ComfyUI-Image-Tools"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:BatchImageResizeProcessor, SingleImagePathLoader, SingleImageUrlLoader"
+ },
+ {
+ "author": "jtrue",
+ "title": "ComfyUI-JaRue",
+ "reference": "https://github.com/jtrue/ComfyUI-JaRue",
+ "files": [
+ "https://github.com/jtrue/ComfyUI-JaRue"
+ ],
+ "nodename_pattern": "_jru$",
+ "install_type": "git-clone",
+ "description": "A collection of nodes powering a tensor oracle on a home network with automation"
+ },
+ {
+ "author": "filliptm",
+ "title": "ComfyUI_Fill-Nodes",
+ "reference": "https://github.com/filliptm/ComfyUI_Fill-Nodes",
+ "files": [
+ "https://github.com/filliptm/ComfyUI_Fill-Nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:FL Image Randomizer. The start of a pack that I will continue to build out to fill the gaps of nodes and functionality that I feel are missing in ComfyUI"
+ },
+ {
+ "author": "zfkun",
+ "title": "ComfyUI_zfkun",
+ "reference": "https://github.com/zfkun/ComfyUI_zfkun",
+ "files": [
+ "https://github.com/zfkun/ComfyUI_zfkun"
+ ],
+ "install_type": "git-clone",
+ "description": "A collection of nodes for common tools, including text preview, text translation (multi-platform, multi-language), image loader, webcam capture."
+ },
+ {
+ "author": "zcfrank1st",
+ "title": "Comfyui-Toolbox",
+ "reference": "https://github.com/zcfrank1st/Comfyui-Toolbox",
+ "files": [
+ "https://github.com/zcfrank1st/Comfyui-Toolbox"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Preview Json, Save Json, Test Json Preview, ... preview and save nodes"
+ },
+ {
+ "author": "talesofai",
+ "title": "ComfyUI Browser",
+ "reference": "https://github.com/talesofai/comfyui-browser",
+ "files": [
+ "https://github.com/talesofai/comfyui-browser"
+ ],
+ "install_type": "git-clone",
+ "description": "This is an image/video/workflow browser and manager for ComfyUI. You can add images/videos/workflows to collections and load them into ComfyUI. You will be able to use your collections everywhere."
+ },
+ {
+ "author": "yolain",
+ "title": "ComfyUI Easy Use",
+ "reference": "https://github.com/yolain/ComfyUI-Easy-Use",
+ "files": [
+ "https://github.com/yolain/ComfyUI-Easy-Use"
+ ],
+ "install_type": "git-clone",
+ "description": "To enhance the usability of ComfyUI, optimizations and integrations have been implemented for several commonly used nodes."
+ },
+ {
+ "author": "bruefire",
+ "title": "ComfyUI Sequential Image Loader",
+ "reference": "https://github.com/bruefire/ComfyUI-SeqImageLoader",
+ "files": [
+ "https://github.com/bruefire/ComfyUI-SeqImageLoader"
+ ],
+ "install_type": "git-clone",
+ "description": "This is an extension node for ComfyUI that allows you to load frames from a video in bulk and perform masking and sketching on each frame through a GUI."
+ },
+ {
+ "author": "mmaker",
+ "title": "Color Enhance",
+ "reference": "https://git.mmaker.moe/mmaker/sd-webui-color-enhance",
+ "files": [
+ "https://git.mmaker.moe/mmaker/sd-webui-color-enhance"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Color Enhance, Color Blend. This is the same algorithm GIMP/GEGL uses for color enhancement. The gist of this implementation is that it converts the color space to CIELCh(ab) and normalizes the chroma (or [colorfulness](https://en.wikipedia.org/wiki/Colorfulness)) component. Original source can be found in the link below."
+ },
+ {
+ "author": "modusCell",
+ "title": "Preset Dimensions",
+ "reference": "https://github.com/modusCell/ComfyUI-dimension-node-modusCell",
+ "files": [
+ "https://github.com/modusCell/ComfyUI-dimension-node-modusCell"
+ ],
+ "install_type": "git-clone",
+ "description": "Simple node for sharing latent image size between nodes. Preset dimensions for SD and XL."
+ },
+ {
+ "author": "aria1th",
+ "title": "ComfyUI-LogicUtils",
+ "reference": "https://github.com/aria1th/ComfyUI-LogicUtils",
+ "files": [
+ "https://github.com/aria1th/ComfyUI-LogicUtils"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:UniformRandomFloat..., RandomShuffleInt, YieldableIterator..., LogicGate..., Add..., MergeString, MemoryNode, ..."
+ },
+ {
+ "author": "MitoshiroPJ",
+ "title": "ComfyUI Slothful Attention",
+ "reference": "https://github.com/MitoshiroPJ/comfyui_slothful_attention",
+ "files": [
+ "https://github.com/MitoshiroPJ/comfyui_slothful_attention"
+ ],
+ "install_type": "git-clone",
+ "description": "This custom node allows controlling output without training. The reducing method is similar to [a/Spatial-Reduction Attention](https://paperswithcode.com/method/spatial-reduction-attention), but generation speed may not increase at typical image sizes due to overhead (in some cases it is slightly slower)."
+ },
+ {
+ "author": "brianfitzgerald",
+ "title": "StyleAligned for ComfyUI",
+ "reference": "https://github.com/brianfitzgerald/style_aligned_comfy",
+ "files": [
+ "https://github.com/brianfitzgerald/style_aligned_comfy"
+ ],
+ "install_type": "git-clone",
+ "description": "Implementation of the [a/StyleAligned](https://style-aligned-gen.github.io/) paper for ComfyUI. This node allows you to apply a consistent style to all images in a batch; by default it will use the first image in the batch as the style reference, forcing all other images to be consistent with it."
+ },
+ {
+ "author": "deroberon",
+ "title": "demofusion-comfyui",
+ "reference": "https://github.com/deroberon/demofusion-comfyui",
+ "files": [
+ "https://github.com/deroberon/demofusion-comfyui"
+ ],
+ "install_type": "git-clone",
+ "description": "The Demofusion Custom Node is a wrapper that adapts the work and implementation of the [a/DemoFusion](https://ruoyidu.github.io/demofusion/demofusion.html) technique created and implemented by Ruoyi Du to the ComfyUI environment."
+ },
+ {
+ "author": "deroberon",
+ "title": "StableZero123-comfyui",
+ "reference": "https://github.com/deroberon/StableZero123-comfyui",
+ "files": [
+ "https://github.com/deroberon/StableZero123-comfyui"
+ ],
+ "install_type": "git-clone",
+ "description": "StableZero123 is a node wrapper that uses the model and technique provided [here](https://github.com/SUDO-AI-3D/zero123plus/). It uses the Zero123plus model to generate 3D views using just one image."
+ },
+ {
+ "author": "glifxyz",
+ "title": "ComfyUI-GlifNodes",
+ "reference": "https://github.com/glifxyz/ComfyUI-GlifNodes",
+ "files": [
+ "https://github.com/glifxyz/ComfyUI-GlifNodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Consistency VAE Decoder."
+ },
+ {
+ "author": "concarne000",
+ "title": "ConCarneNode",
+ "reference": "https://github.com/concarne000/ConCarneNode",
+ "files": [
+ "https://github.com/concarne000/ConCarneNode"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Bing Image Grabber node for ComfyUI."
+ },
+ {
+ "author": "Aegis72",
+ "title": "AegisFlow Utility Nodes",
+ "reference": "https://github.com/aegis72/aegisflow_utility_nodes",
+ "files": [
+ "https://github.com/aegis72/aegisflow_utility_nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "These nodes will be placed in comfyui/custom_nodes/aegisflow and contain the image passer, which accepts an image (either wired or wirelessly) as input and passes it through. The Latent passer does the same for latents, and the Preprocessor chooser allows a passthrough image and 10 controlnets to be passed in AegisFlow Shima. The inputs on the Preprocessor chooser should not be renamed if you intend to accept image inputs wirelessly through UE nodes. It can be done, but the send node input regex for each controlnet preprocessor column must also be changed."
+ },
+ {
+ "author": "Aegis72",
+ "title": "ComfyUI-styles-all",
+ "reference": "https://github.com/aegis72/comfyui-styles-all",
+ "files": [
+ "https://github.com/aegis72/comfyui-styles-all"
+ ],
+ "install_type": "git-clone",
+ "description": "This is a straight clone of Azazeal04's all-in-one styler menu, which was removed from GitHub on Jan 21, 2024. I have made no changes to the files at all."
+ },
+ {
+ "author": "glibsonoran",
+ "title": "Plush-for-ComfyUI",
+ "reference": "https://github.com/glibsonoran/Plush-for-ComfyUI",
+ "files": [
+ "https://github.com/glibsonoran/Plush-for-ComfyUI"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Style Prompt, OAI Dall_e Image. Plush contains two OpenAI-enabled nodes: Style Prompt: Takes your prompt and the art style you specify and generates a prompt from ChatGPT3 or 4 that Stable Diffusion can use to generate an image in that style. OAI Dall_e 3: Takes your prompt and parameters and produces a Dall_e3 image in ComfyUI."
+ },
+ {
+ "author": "vienteck",
+ "title": "ComfyUI-Chat-GPT-Integration",
+ "reference": "https://github.com/vienteck/ComfyUI-Chat-GPT-Integration",
+ "files": [
+ "https://github.com/vienteck/ComfyUI-Chat-GPT-Integration"
+ ],
+ "install_type": "git-clone",
+ "description": "This extension is a reimagined version based on the [a/ComfyUI-QualityOfLifeSuit_Omar92](https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92) extension, and it supports integration with ChatGPT through the new OpenAI API.\nNOTE: See detailed installation instructions on the [a/repository](https://github.com/vienteck/ComfyUI-Chat-GPT-Integration)."
+ },
+ {
+ "author": "MNeMoNiCuZ",
+ "title": "ComfyUI-mnemic-nodes",
+ "reference": "https://github.com/MNeMoNiCuZ/ComfyUI-mnemic-nodes",
+ "files": [
+ "https://github.com/MNeMoNiCuZ/ComfyUI-mnemic-nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Save Text File"
+ },
+ {
+ "author": "AI2lab",
+ "title": "comfyUI-tool-2lab",
+ "reference": "https://github.com/AI2lab/comfyUI-tool-2lab",
+ "files": [
+ "https://github.com/AI2lab/comfyUI-tool-2lab"
+ ],
+ "install_type": "git-clone",
+ "description": "Integrate non-painting capabilities into ComfyUI, including data, algorithms, video processing, large models, etc., to facilitate the construction of more powerful workflows."
+ },
+ {
+ "author": "SpaceKendo",
+ "title": "Text to video for Stable Video Diffusion in ComfyUI",
+ "reference": "https://github.com/SpaceKendo/ComfyUI-svd_txt2vid",
+ "files": [
+ "https://github.com/SpaceKendo/ComfyUI-svd_txt2vid"
+ ],
+ "install_type": "git-clone",
+ "description": "This node replaces the init_image conditioning for the [a/Stable Video Diffusion](https://github.com/Stability-AI/generative-models) image-to-video model with text embeds, together with a conditioning frame. The conditioning frame is a set of latents."
+ },
+ {
+ "author": "NimaNzrii",
+ "title": "comfyui-popup_preview",
+ "reference": "https://github.com/NimaNzrii/comfyui-popup_preview",
+ "files": [
+ "https://github.com/NimaNzrii/comfyui-popup_preview"
+ ],
+ "install_type": "git-clone",
+ "description": "Popup preview for ComfyUI."
+ },
+ {
+ "author": "NimaNzrii",
+ "title": "comfyui-photoshop",
+ "reference": "https://github.com/NimaNzrii/comfyui-photoshop",
+ "files": [
+ "https://github.com/NimaNzrii/comfyui-photoshop"
+ ],
+ "install_type": "git-clone",
+ "description": "Photoshop node inside of ComfyUI; sends and gets data from Photoshop."
+ },
+ {
+ "author": "Rui",
+ "title": "RUI-Nodes",
+ "reference": "https://github.com/rui40000/RUI-Nodes",
+ "files": [
+ "https://github.com/rui40000/RUI-Nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Rui's workflow-specific custom node, written using GPT."
+ },
+ {
+ "author": "dmarx",
+ "title": "ComfyUI-Keyframed",
+ "reference": "https://github.com/dmarx/ComfyUI-Keyframed",
+ "files": [
+ "https://github.com/dmarx/ComfyUI-Keyframed"
+ ],
+ "install_type": "git-clone",
+ "description": "ComfyUI nodes to facilitate parameter/prompt keyframing, with nodes for defining and manipulating parameter curves. Essentially provides a ComfyUI interface to the [a/keyframed](https://github.com/dmarx/keyframed) library."
+ },
+ {
+ "author": "dmarx",
+ "title": "ComfyUI-AudioReactive",
+ "reference": "https://github.com/dmarx/ComfyUI-AudioReactive",
+ "files": [
+ "https://github.com/dmarx/ComfyUI-AudioReactive"
+ ],
+ "install_type": "git-clone",
+ "description": "Ports the audioreactivity pipeline from vktrs to ComfyUI."
+ },
+ {
+ "author": "TripleHeadedMonkey",
+ "title": "ComfyUI_MileHighStyler",
+ "reference": "https://github.com/TripleHeadedMonkey/ComfyUI_MileHighStyler",
+ "files": [
+ "https://github.com/TripleHeadedMonkey/ComfyUI_MileHighStyler"
+ ],
+ "install_type": "git-clone",
+ "description": "This extension provides various SDXL Prompt Stylers. See: [a/youtube](https://youtu.be/WBHI-2uww7o?si=dijvDaUI4nmx4VkF)"
+ },
+ {
+ "author": "BennyKok",
+ "title": "ComfyUI Deploy",
+ "reference": "https://github.com/BennyKok/comfyui-deploy",
+ "files": [
+ "https://github.com/BennyKok/comfyui-deploy"
+ ],
+ "install_type": "git-clone",
+ "description": "Open-source ComfyUI deployment platform; a Vercel for generative workflow infra."
+ },
+ {
+ "author": "florestefano1975",
+ "title": "comfyui-portrait-master",
+ "reference": "https://github.com/florestefano1975/comfyui-portrait-master",
+ "files": [
+ "https://github.com/florestefano1975/comfyui-portrait-master"
+ ],
+ "install_type": "git-clone",
+ "description": "ComfyUI Portrait Master. A node designed to help AI image creators generate prompts for human portraits."
+ },
+ {
+ "author": "florestefano1975",
+ "title": "comfyui-prompt-composer",
+ "reference": "https://github.com/florestefano1975/comfyui-prompt-composer",
+ "files": [
+ "https://github.com/florestefano1975/comfyui-prompt-composer"
+ ],
+ "install_type": "git-clone",
+ "description": "A suite of tools for prompt management. Combining nodes helps the user sequence strings for prompts, also creating logical groupings if necessary. Individual nodes can be chained together in any order."
+ },
+ {
+ "author": "mozman",
+ "title": "ComfyUI_mozman_nodes",
+ "reference": "https://github.com/mozman/ComfyUI_mozman_nodes",
+ "files": [
+ "https://github.com/mozman/ComfyUI_mozman_nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "This extension provides styler nodes for SDXL.\n\nNOTE: Due to the dynamic nature of node name definitions, ComfyUI-Manager cannot recognize the node list from this extension. The Missing nodes and Badge features are not available for this extension."
+ },
+ {
+ "author": "rcsaquino",
+ "title": "rcsaquino/comfyui-custom-nodes",
+ "reference": "https://github.com/rcsaquino/comfyui-custom-nodes",
+ "files": [
+ "https://github.com/rcsaquino/comfyui-custom-nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: VAE Processor, VAE Loader, Background Remover"
+ },
+ {
+ "author": "rcfcu2000",
+ "title": "zhihuige-nodes-comfyui",
+ "reference": "https://github.com/rcfcu2000/zhihuige-nodes-comfyui",
+ "files": [
+ "https://github.com/rcfcu2000/zhihuige-nodes-comfyui"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Combine ZHGMasks, Cover ZHGMasks, ZHG FaceIndex, ZHG SaveImage, ZHG SmoothEdge, ZHG GetMaskArea, ..."
+ },
+ {
+ "author": "IDGallagher",
+ "title": "IG Interpolation Nodes",
+ "reference": "https://github.com/IDGallagher/ComfyUI-IG-Nodes",
+ "files": [
+ "https://github.com/IDGallagher/ComfyUI-IG-Nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Custom nodes to aid in the exploration of Latent Space"
+ },
+ {
+ "author": "violet-chen",
+ "title": "comfyui-psd2png",
+ "reference": "https://github.com/violet-chen/comfyui-psd2png",
+ "files": [
+ "https://github.com/violet-chen/comfyui-psd2png"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Psd2Png."
+ },
+ {
+ "author": "lldacing",
+ "title": "comfyui-easyapi-nodes",
+ "reference": "https://github.com/lldacing/comfyui-easyapi-nodes",
+ "files": [
+ "https://github.com/lldacing/comfyui-easyapi-nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Base64 To Image, Image To Base64, Load Image To Base64."
+ },
+ {
+ "author": "CosmicLaca",
+ "title": "Primere nodes for ComfyUI",
+ "reference": "https://github.com/CosmicLaca/ComfyUI_Primere_Nodes",
+ "files": [
+ "https://github.com/CosmicLaca/ComfyUI_Primere_Nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "This extension provides various utility nodes. Inputs(prompt, styles, dynamic, merger, ...), Outputs(style pile), Dashboard(selectors, loader, switch, ...), Networks(LORA, Embedding, Hypernetwork), Visuals(visual selectors, ...)"
+ },
+ {
+ "author": "RenderRift",
+ "title": "ComfyUI-RenderRiftNodes",
+ "reference": "https://github.com/RenderRift/ComfyUI-RenderRiftNodes",
+ "files": [
+ "https://github.com/RenderRift/ComfyUI-RenderRiftNodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:RR_Date_Folder_Format, RR_Image_Metadata_Overlay, RR_VideoPathMetaExtraction, RR_DisplayMetaOptions. This extension provides nodes designed to enhance the Animatediff workflow."
+ },
+ {
+ "author": "OpenArt-AI",
+ "title": "ComfyUI Assistant",
+ "reference": "https://github.com/OpenArt-AI/ComfyUI-Assistant",
+ "files": [
+ "https://github.com/OpenArt-AI/ComfyUI-Assistant"
+ ],
+ "install_type": "git-clone",
+ "description": "ComfyUI Assistant is your one-stop plugin for everything you need to get started with ComfyUI. Now it provides useful courses, tutorials, and basic templates."
+ },
+ {
+ "author": "ttulttul",
+ "title": "ComfyUI Iterative Mixing Nodes",
+ "reference": "https://github.com/ttulttul/ComfyUI-Iterative-Mixer",
+ "files": [
+ "https://github.com/ttulttul/ComfyUI-Iterative-Mixer"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Iterative Mixing KSampler, Batch Unsampler, Iterative Mixing KSampler Advanced"
+ },
+ {
+ "author": "ttulttul",
+ "title": "ComfyUI-Tensor-Operations",
+ "reference": "https://github.com/ttulttul/ComfyUI-Tensor-Operations",
+ "files": [
+ "https://github.com/ttulttul/ComfyUI-Tensor-Operations"
+ ],
+ "install_type": "git-clone",
+ "description": "This repo contains nodes for ComfyUI that implement some helpful operations on tensors, such as normalization."
+ },
+ {
+ "author": "jitcoder",
+ "title": "LoraInfo",
+ "reference": "https://github.com/jitcoder/lora-info",
+ "files": [
+ "https://github.com/jitcoder/lora-info"
+ ],
+ "install_type": "git-clone",
+ "description": "Shows Lora information from CivitAI and outputs trigger words and example prompt"
+ },
+ {
+ "author": "ceruleandeep",
+ "title": "ComfyUI LLaVA Captioner",
+ "reference": "https://github.com/ceruleandeep/ComfyUI-LLaVA-Captioner",
+ "files": [
+ "https://github.com/ceruleandeep/ComfyUI-LLaVA-Captioner"
+ ],
+ "install_type": "git-clone",
+ "description": "A ComfyUI extension for chatting with your images. Runs on your own system, no external services used, no filter. Uses the [a/LLaVA multimodal LLM](https://llava-vl.github.io/) so you can give instructions or ask questions in natural language. It's maybe as smart as GPT3.5, and it can see."
+ },
+ {
+ "author": "thecooltechguy",
+ "title": "ComfyUI-ComfyRun",
+ "reference": "https://github.com/thecooltechguy/ComfyUI-ComfyRun",
+ "files": [
+ "https://github.com/thecooltechguy/ComfyUI-ComfyRun"
+ ],
+ "install_type": "git-clone",
+ "description": "The easiest way to run & share any ComfyUI workflow [a/https://comfyrun.com](https://comfyrun.com)"
+ },
+ {
+ "author": "thecooltechguy",
+ "title": "ComfyUI-MagicAnimate",
+ "reference": "https://github.com/thecooltechguy/ComfyUI-MagicAnimate",
+ "files": [
+ "https://github.com/thecooltechguy/ComfyUI-MagicAnimate"
+ ],
+ "install_type": "git-clone",
+ "description": "Easily use Magic Animate within ComfyUI!\n[w/WARN: This extension requires 15GB disk space.]"
+ },
+ {
+ "author": "thecooltechguy",
+ "title": "ComfyUI-ComfyWorkflows",
+ "reference": "https://github.com/thecooltechguy/ComfyUI-ComfyWorkflows",
+ "files": [
+ "https://github.com/thecooltechguy/ComfyUI-ComfyWorkflows"
+ ],
+ "install_type": "git-clone",
+ "description": "The best way to run, share, & discover thousands of ComfyUI workflows."
+ },
+ {
+ "author": "styler00dollar",
+ "title": "ComfyUI-sudo-latent-upscale",
+ "reference": "https://github.com/styler00dollar/ComfyUI-sudo-latent-upscale",
+ "files": [
+ "https://github.com/styler00dollar/ComfyUI-sudo-latent-upscale"
+ ],
+ "install_type": "git-clone",
+ "description": "Directly upscaling inside the latent space. Model was trained for SD1.5 and drawn content. Might add new architectures or update models at some point. This took heavy inspiration from [city96/SD-Latent-Upscaler](https://github.com/city96/SD-Latent-Upscaler) and [Ttl/ComfyUi_NNLatentUpscale](https://github.com/Ttl/ComfyUi_NNLatentUpscale)."
+ },
+ {
+ "author": "styler00dollar",
+ "title": "ComfyUI-deepcache",
+ "reference": "https://github.com/styler00dollar/ComfyUI-deepcache",
+ "files": [
+ "https://github.com/styler00dollar/ComfyUI-deepcache"
+ ],
+ "install_type": "git-clone",
+ "description": "This extension provides nodes for [a/DeepCache: Accelerating Diffusion Models for Free](https://arxiv.org/abs/2312.00858)\nNOTE: Original code can be found [a/here](https://gist.github.com/laksjdjf/435c512bc19636e9c9af4ee7bea9eb86). Full credit to laksjdjf for sharing the code."
+ },
+ {
+ "author": "HarroweD and quadmoon",
+ "title": "Harronode",
+ "reference": "https://github.com/NotHarroweD/Harronode",
+ "nodename_pattern": "Harronode",
+ "files": [
+ "https://github.com/NotHarroweD/Harronode"
+ ],
+ "install_type": "git-clone",
+ "description": "Harronode is a custom node designed to build prompts easily for use with the Harrlogos SDXL LoRA. This Node simplifies the process of crafting prompts and makes all built-in activation terms available at your fingertips."
+ },
+ {
+ "author": "Limitex",
+ "title": "ComfyUI-Calculation",
+ "reference": "https://github.com/Limitex/ComfyUI-Calculation",
+ "files": [
+ "https://github.com/Limitex/ComfyUI-Calculation"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Center Calculation. Improved Numerical Calculation for ComfyUI"
+ },
+ {
+ "author": "Limitex",
+ "title": "ComfyUI-Diffusers",
+ "reference": "https://github.com/Limitex/ComfyUI-Diffusers",
+ "files": [
+ "https://github.com/Limitex/ComfyUI-Diffusers"
+ ],
+ "install_type": "git-clone",
+ "description": "This extension enables the use of the Diffusers pipeline in ComfyUI."
+ },
+ {
+ "author": "edenartlab",
+ "title": "eden_comfy_pipelines",
+ "reference": "https://github.com/edenartlab/eden_comfy_pipelines",
+ "files": [
+ "https://github.com/edenartlab/eden_comfy_pipelines"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:CLIP Interrogator, ..."
+ },
+ {
+ "author": "pkpk",
+ "title": "ComfyUI-SaveAVIF",
+ "reference": "https://github.com/pkpkTech/ComfyUI-SaveAVIF",
+ "files": [
+ "https://github.com/pkpkTech/ComfyUI-SaveAVIF"
+ ],
+ "install_type": "git-clone",
+ "description": "A custom node for ComfyUI that saves images in AVIF format. Workflows can be loaded from images saved by this node."
+ },
+ {
+ "author": "pkpkTech",
+ "title": "ComfyUI-ngrok",
+ "reference": "https://github.com/pkpkTech/ComfyUI-ngrok",
+ "files": [
+ "https://github.com/pkpkTech/ComfyUI-ngrok"
+ ],
+ "install_type": "git-clone",
+ "description": "Use ngrok to allow external access to ComfyUI.\nNOTE: Need to manually modify a token inside the __init__.py file."
+ },
+ {
+ "author": "pkpk",
+ "title": "ComfyUI-TemporaryLoader",
+ "reference": "https://github.com/pkpkTech/ComfyUI-TemporaryLoader",
+ "files": [
+ "https://github.com/pkpkTech/ComfyUI-TemporaryLoader"
+ ],
+ "install_type": "git-clone",
+ "description": "This is a custom node of ComfyUI that downloads and loads models from the input URL. The model is temporarily downloaded into memory and not saved to storage.\nThis could be useful when trying out models or when using various models on machines with limited storage. Since the model is downloaded into memory, expect higher memory usage than usual."
+ },
+ {
+ "author": "pkpkTech",
+ "title": "ComfyUI-SaveQueues",
+ "reference": "https://github.com/pkpkTech/ComfyUI-SaveQueues",
+ "files": [
+ "https://github.com/pkpkTech/ComfyUI-SaveQueues"
+ ],
+ "install_type": "git-clone",
+ "description": "Add a button to the menu to save and load the running queue and the pending queues.\nThis is intended to be used when you want to exit ComfyUI with queues still remaining."
+ },
+ {
+ "author": "Crystian",
+ "title": "Crystools",
+ "reference": "https://github.com/crystian/ComfyUI-Crystools",
+ "files": [
+ "https://github.com/crystian/ComfyUI-Crystools"
+ ],
+ "install_type": "git-clone",
+ "description": "With this suite, you can see the resources monitor, progress bar & time elapsed, metadata and compare between two images, compare between two JSONs, show any value to console/display, pipes, and more!\nThis provides better nodes to load/save images, previews, etc., and see \"hidden\" data without loading a new workflow."
+ },
+ {
+ "author": "Crystian",
+ "title": "Crystools-save",
+ "reference": "https://github.com/crystian/ComfyUI-Crystools-save",
+ "files": [
+ "https://github.com/crystian/ComfyUI-Crystools-save"
+ ],
+ "install_type": "git-clone",
+ "description": "With this quality of life extension, you can save your workflow with a specific name and include additional details such as the author, a description, and the version (in metadata/json). Important: When you share your workflow (via png/json), others will be able to see your information!"
+ },
+ {
+ "author": "Kangkang625",
+ "title": "ComfyUI-Paint-by-Example",
+ "reference": "https://github.com/Kangkang625/ComfyUI-paint-by-example",
+ "pip": ["diffusers"],
+ "files": [
+ "https://github.com/Kangkang625/ComfyUI-paint-by-example"
+ ],
+ "install_type": "git-clone",
+ "description": "This repo is a simple implementation of [a/Paint-by-Example](https://github.com/Fantasy-Studio/Paint-by-Example) based on its [a/huggingface pipeline](https://huggingface.co/Fantasy-Studio/Paint-by-Example)."
+ },
+ {
+ "author": "54rt1n",
+ "title": "ComfyUI-DareMerge",
+ "reference": "https://github.com/54rt1n/ComfyUI-DareMerge",
+ "files": [
+ "https://github.com/54rt1n/ComfyUI-DareMerge"
+ ],
+ "install_type": "git-clone",
+ "description": "Merge two checkpoint models by DARE-TIES [a/MergeLM](https://github.com/yule-BUAA/MergeLM), sort of."
+ },
+ {
+ "author": "an90ray",
+ "title": "ComfyUI_RErouter_CustomNodes",
+ "reference": "https://github.com/an90ray/ComfyUI_RErouter_CustomNodes",
+ "files": [
+ "https://github.com/an90ray/ComfyUI_RErouter_CustomNodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: RErouter, String (RE), Int (RE)"
+ },
+ {
+ "author": "jesenzhang",
+ "title": "ComfyUI_StreamDiffusion",
+ "reference": "https://github.com/jesenzhang/ComfyUI_StreamDiffusion",
+ "files": [
+ "https://github.com/jesenzhang/ComfyUI_StreamDiffusion"
+ ],
+ "install_type": "git-clone",
+ "description": "This is a simple implementation of StreamDiffusion (A Pipeline-Level Solution for Real-Time Interactive Generation) for ComfyUI."
+ },
+ {
+ "author": "ai-liam",
+ "title": "LiamUtil",
+ "reference": "https://github.com/ai-liam/comfyui_liam_util",
+ "files": [
+ "https://github.com/ai-liam/comfyui_liam_util"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: LiamLoadImage. This node provides the capability to load images from a URL."
+ },
+ {
+ "author": "Ryuukeisyou",
+ "title": "comfyui_face_parsing",
+ "reference": "https://github.com/Ryuukeisyou/comfyui_face_parsing",
+ "files": [
+ "https://github.com/Ryuukeisyou/comfyui_face_parsing"
+ ],
+ "install_type": "git-clone",
+ "description": "This is a set of custom nodes for ComfyUI. The nodes utilize the [a/face parsing model](https://huggingface.co/jonathandinu/face-parsing) to provide detailed segmentation of faces. To improve face segmentation accuracy, the [a/yolov8 face model](https://huggingface.co/Bingsu/adetailer/) is used to first extract the face from an image. There are also auxiliary nodes for image and mask processing. A guided filter is also provided for skin smoothing."
+ },
+ {
+ "author": "tocubed",
+ "title": "ComfyUI-AudioReactor",
+ "reference": "https://github.com/tocubed/ComfyUI-AudioReactor",
+ "files": [
+ "https://github.com/tocubed/ComfyUI-AudioReactor"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Shadertoy, Load Audio (from Path), Audio Frame Transform (Shadertoy), Audio Frame Transform (Beats)"
+ },
+ {
+ "author": "ntc-ai",
+ "title": "ComfyUI - Apply LoRA Stacker with DARE",
+ "reference": "https://github.com/ntc-ai/ComfyUI-DARE-LoRA-Merge",
+ "files": [
+ "https://github.com/ntc-ai/ComfyUI-DARE-LoRA-Merge"
+ ],
+ "install_type": "git-clone",
+ "description": "An experiment about combining multiple LoRAs with [a/DARE](https://arxiv.org/pdf/2311.03099.pdf)"
+ },
+ {
+ "author": "wwwins",
+ "title": "ComfyUI-Simple-Aspect-Ratio",
+ "reference": "https://github.com/wwwins/ComfyUI-Simple-Aspect-Ratio",
+ "files": [
+ "https://github.com/wwwins/ComfyUI-Simple-Aspect-Ratio"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:SimpleAspectRatio"
+ },
+ {
+ "author": "ownimage",
+ "title": "ComfyUI-ownimage",
+ "reference": "https://github.com/ownimage/ComfyUI-ownimage",
+ "files": [
+ "https://github.com/ownimage/ComfyUI-ownimage"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Caching Image Loader."
+ },
+ {
+ "author": "Millyarde",
+ "title": "Pomfy - Photoshop and ComfyUI 2-way sync",
+ "reference": "https://github.com/Millyarde/Pomfy",
+ "files": [
+ "https://github.com/Millyarde/Pomfy"
+ ],
+ "install_type": "git-clone",
+ "description": "Photoshop custom nodes inside of ComfyUI; sends and gets data via a Photoshop UXP plugin for cross-platform support."
+ },
+ {
+ "author": "Ryuukeisyou",
+ "title": "comfyui_image_io_helpers",
+ "reference": "https://github.com/Ryuukeisyou/comfyui_image_io_helpers",
+ "files": [
+ "https://github.com/Ryuukeisyou/comfyui_image_io_helpers"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:ImageLoadFromBase64, ImageLoadByPath, ImageLoadAsMaskByPath, ImageSaveToPath, ImageSaveAsBase64."
+ },
+ {
+ "author": "flowtyone",
+ "title": "ComfyUI-Flowty-LDSR",
+ "reference": "https://github.com/flowtyone/ComfyUI-Flowty-LDSR",
+ "files": [
+ "https://github.com/flowtyone/ComfyUI-Flowty-LDSR"
+ ],
+ "install_type": "git-clone",
+ "description": "This is a custom node that lets you take advantage of Latent Diffusion Super Resolution (LDSR) models inside ComfyUI."
+ },
+ {
+ "author": "massao000",
+ "title": "ComfyUI_aspect_ratios",
+ "reference": "https://github.com/massao000/ComfyUI_aspect_ratios",
+ "files": [
+ "https://github.com/massao000/ComfyUI_aspect_ratios"
+ ],
+ "install_type": "git-clone",
+ "description": "Aspect ratio selector for ComfyUI based on [a/sd-webui-ar](https://github.com/alemelis/sd-webui-ar?tab=readme-ov-file)."
+ },
+ {
+ "author": "SiliconFlow",
+ "title": "OneDiff Nodes",
+ "reference": "https://github.com/siliconflow/onediff_comfy_nodes",
+ "files": [
+ "https://github.com/siliconflow/onediff_comfy_nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "[a/Onediff](https://github.com/siliconflow/onediff) ComfyUI Nodes."
+ },
+ {
+ "author": "ZHO-ZHO-ZHO",
+ "title": "ComfyUI-ArtGallery",
+ "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-ArtGallery",
+ "files": [
+ "https://github.com/ZHO-ZHO-ZHO/ComfyUI-ArtGallery"
+ ],
+ "install_type": "git-clone",
+ "description": "Prompt Visualization | Art Gallery\n[w/WARN: Installation requires 2GB of space, and it will involve a long download time.]"
+ },
+ {
+ "author": "hinablue",
+ "title": "ComfyUI 3D Pose Editor",
+ "reference": "https://github.com/hinablue/ComfyUI_3dPoseEditor",
+ "files": [
+ "https://github.com/hinablue/ComfyUI_3dPoseEditor"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:3D Pose Editor"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-DynamiCrafter",
+ "reference": "https://github.com/chaojie/ComfyUI-DynamiCrafter",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-DynamiCrafter"
+ ],
+ "install_type": "git-clone",
+ "description": "Better Dynamic, Higher Resolution, and Stronger Coherence!"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-Panda3d",
+ "reference": "https://github.com/chaojie/ComfyUI-Panda3d",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-Panda3d"
+ ],
+ "install_type": "git-clone",
+ "description": "ComfyUI 3d engine"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-Pymunk",
+ "reference": "https://github.com/chaojie/ComfyUI-Pymunk",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-Pymunk"
+ ],
+ "install_type": "git-clone",
+ "description": "Pymunk is an easy-to-use pythonic 2d physics library that can be used whenever you need 2d rigid body physics from Python"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-MotionCtrl",
+ "reference": "https://github.com/chaojie/ComfyUI-MotionCtrl",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-MotionCtrl"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Download the weights of MotionCtrl [a/motionctrl.pth](https://huggingface.co/TencentARC/MotionCtrl/blob/main/motionctrl.pth) and put it in ComfyUI/models/checkpoints"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-Motion-Vector-Extractor",
+ "reference": "https://github.com/chaojie/ComfyUI-Motion-Vector-Extractor",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-Motion-Vector-Extractor"
+ ],
+ "install_type": "git-clone",
+ "description": "Note that we currently provide the package only for x86-64 Linux, such as Ubuntu or Debian, and Python 3.8, 3.9, and 3.10."
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-MotionCtrl-SVD",
+ "reference": "https://github.com/chaojie/ComfyUI-MotionCtrl-SVD",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-MotionCtrl-SVD"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Download the weights of MotionCtrl-SVD [a/motionctrl_svd.ckpt](https://huggingface.co/TencentARC/MotionCtrl/blob/main/motionctrl_svd.ckpt) and put it in ComfyUI/models/checkpoints"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-DragNUWA",
+ "reference": "https://github.com/chaojie/ComfyUI-DragNUWA",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-DragNUWA"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Download the weights of DragNUWA [a/drag_nuwa_svd.pth](https://drive.google.com/file/d/1Z4JOley0SJCb35kFF4PCc6N6P1ftfX4i/view) and put it in ComfyUI/models/checkpoints/drag_nuwa_svd.pth\n[w/Due to changes in the torch package and versions of many other packages, it may disrupt your installation environment.]"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-Moore-AnimateAnyone",
+ "reference": "https://github.com/chaojie/ComfyUI-Moore-AnimateAnyone",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-Moore-AnimateAnyone"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Run python tools/download_weights.py first to download weights automatically"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-I2VGEN-XL",
+ "reference": "https://github.com/chaojie/ComfyUI-I2VGEN-XL",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-I2VGEN-XL"
+ ],
+ "install_type": "git-clone",
+ "description": "This is an implementation of [a/i2vgen-xl](https://github.com/ali-vilab/i2vgen-xl)"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-LightGlue",
+ "reference": "https://github.com/chaojie/ComfyUI-LightGlue",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-LightGlue"
+ ],
+ "install_type": "git-clone",
+ "description": "This is a ComfyUI implementation of LightGlue to generate a motion brush"
+ },
+ {
+ "author": "chaojie",
+ "title": "ComfyUI-RAFT",
+ "reference": "https://github.com/chaojie/ComfyUI-RAFT",
+ "files": [
+ "https://github.com/chaojie/ComfyUI-RAFT"
+ ],
+ "install_type": "git-clone",
+ "description": "This is a ComfyUI implementation of RAFT to generate a motion brush"
+ },
+ {
+ "author": "alexopus",
+ "title": "ComfyUI Image Saver",
+ "reference": "https://github.com/alexopus/ComfyUI-Image-Saver",
+ "files": [
+ "https://github.com/alexopus/ComfyUI-Image-Saver"
+ ],
+ "install_type": "git-clone",
+ "description": "Allows you to save images with their generation metadata compatible with Civitai. Works with png, jpeg and webp. Stores LoRAs, models and embeddings hashes for resource recognition."
+ },
+ {
+ "author": "kft334",
+ "title": "Knodes",
+ "reference": "https://github.com/kft334/Knodes",
+ "files": [
+ "https://github.com/kft334/Knodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: Image(s) To Websocket (Base64), Load Image (Base64), Load Images (Base64)"
+ },
+ {
+ "author": "MrForExample",
+ "title": "ComfyUI-3D-Pack",
+ "reference": "https://github.com/MrForExample/ComfyUI-3D-Pack",
+ "files": [
+ "https://github.com/MrForExample/ComfyUI-3D-Pack"
+ ],
+ "nodename_pattern": "^\\[Comfy3D\\]",
+ "install_type": "git-clone",
+ "description": "An extensive node suite that enables ComfyUI to process 3D inputs (Mesh & UV Texture, etc) using cutting edge algorithms (3DGS, NeRF, etc.)\nNOTE: Pre-built python wheels can be downloaded from [a/https://github.com/remsky/ComfyUI3D-Assorted-Wheels](https://github.com/remsky/ComfyUI3D-Assorted-Wheels)"
+ },
+ {
+ "author": "Mr.ForExample",
+ "title": "ComfyUI-AnimateAnyone-Evolved",
+ "reference": "https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved",
+ "files": [
+ "https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved"
+ ],
+ "nodename_pattern": "^\\[AnimateAnyone\\]",
+ "install_type": "git-clone",
+ "description": "Improved AnimateAnyone implementation that allows you to use the pose image sequence and reference image to generate stylized video.\nThe current goal of this project is to achieve desired pose2video result with 1+FPS on GPUs that are equal to or better than RTX 3080!🚀\n[w/The torch environment may be compromised due to version issues as some torch-related packages are being reinstalled.]"
+ },
+ {
+ "author": "Hangover3832",
+ "title": "ComfyUI-Hangover-Nodes",
+ "reference": "https://github.com/Hangover3832/ComfyUI-Hangover-Nodes",
+ "files": [
+ "https://github.com/Hangover3832/ComfyUI-Hangover-Nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: MS kosmos-2 Interrogator, Save Image w/o Metadata, Image Scale Bounding Box. An implementation of Microsoft's [a/kosmos-2](https://huggingface.co/microsoft/kosmos-2-patch14-224) image-to-text transformer."
+ },
+ {
+ "author": "Hangover3832",
+ "title": "ComfyUI-Hangover-Moondream",
+ "reference": "https://github.com/Hangover3832/ComfyUI-Hangover-Moondream",
+ "files": [
+ "https://github.com/Hangover3832/ComfyUI-Hangover-Moondream"
+ ],
+ "install_type": "git-clone",
+ "description": "Moondream is a lightweight multimodal large language model.\nIMPORTANT: According to the creator, Moondream is for research purposes only, commercial use is not allowed!\n[w/WARN: Additional python code will be downloaded from huggingface and executed. You have to trust this creator if you want to use this node!]"
+ },
+ {
+ "author": "tzwm",
+ "title": "ComfyUI Profiler",
+ "reference": "https://github.com/tzwm/comfyui-profiler",
+ "files": [
+ "https://github.com/tzwm/comfyui-profiler"
+ ],
+ "install_type": "git-clone",
+ "description": "Calculate the execution time of all nodes."
+ },
+ {
+ "author": "Daniel Lewis",
+ "title": "ComfyUI-Llama",
+ "reference": "https://github.com/daniel-lewis-ab/ComfyUI-Llama",
+ "files": [
+ "https://github.com/daniel-lewis-ab/ComfyUI-Llama"
+ ],
+ "install_type": "git-clone",
+ "description": "This is a set of nodes to interact with llama-cpp-python"
+ },
+ {
+ "author": "Daniel Lewis",
+ "title": "ComfyUI-TTS",
+ "reference": "https://github.com/daniel-lewis-ab/ComfyUI-TTS",
+ "files": [
+ "https://github.com/daniel-lewis-ab/ComfyUI-TTS"
+ ],
+ "install_type": "git-clone",
+ "description": "Text To Speech (TTS) for ComfyUI"
+ },
+ {
+ "author": "djbielejeski",
+ "title": "a-person-mask-generator",
+ "reference": "https://github.com/djbielejeski/a-person-mask-generator",
+ "files": [
+ "https://github.com/djbielejeski/a-person-mask-generator"
+ ],
+ "install_type": "git-clone",
+ "description": "Extension for Automatic1111 and ComfyUI to automatically create masks for Background/Hair/Body/Face/Clothes in Img2Img"
+ },
+ {
+ "author": "smagnetize",
+ "title": "kb-comfyui-nodes",
+ "reference": "https://github.com/smagnetize/kb-comfyui-nodes",
+ "files": [
+ "https://github.com/smagnetize/kb-comfyui-nodes"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:SingleImageDataUrlLoader"
+ },
+ {
+ "author": "ginlov",
+ "title": "segment_to_mask_comfyui",
+ "reference": "https://github.com/ginlov/segment_to_mask_comfyui",
+ "files": [
+ "https://github.com/ginlov/segment_to_mask_comfyui"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:SegToMask"
+ },
+ {
+ "author": "glowcone",
+ "title": "Load Image From Base64 URI",
+ "reference": "https://github.com/glowcone/comfyui-base64-to-image",
+ "files": [
+ "https://github.com/glowcone/comfyui-base64-to-image"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes: LoadImageFromBase64. Loads an image and its transparency mask from a base64-encoded data URI for easy API connection."
+ },
+ {
+ "author": "AInseven",
+ "title": "ComfyUI-fastblend",
+ "reference": "https://github.com/AInseven/ComfyUI-fastblend",
+ "files": [
+ "https://github.com/AInseven/ComfyUI-fastblend"
+ ],
+ "install_type": "git-clone",
+ "description": "fastblend for ComfyUI, and other nodes written for video2video: rebatch image, my openpose."
+ },
+ {
+ "author": "HebelHuber",
+ "title": "comfyui-enhanced-save-node",
+ "reference": "https://github.com/HebelHuber/comfyui-enhanced-save-node",
+ "files": [
+ "https://github.com/HebelHuber/comfyui-enhanced-save-node"
+ ],
+ "install_type": "git-clone",
+ "description": "Nodes:Enhanced Save Node"
+ },
+ {
+ "author": "LarryJane491",
+ "title": "Lora-Training-in-Comfy",
+ "reference": "https://github.com/LarryJane491/Lora-Training-in-Comfy",
+ "files": [
+ "https://github.com/LarryJane491/Lora-Training-in-Comfy"
+ ],
+ "install_type": "git-clone",
+ "description": "If you see this message, your ComfyUI-Manager is outdated.\nRecent channel provides only the list of the latest nodes. If you want to find the complete node list, please go to the Default channel.\nMaking LoRA has never been easier!"
+ },
+ {
+ "author": "LarryJane491",
+ "title": "Image-Captioning-in-ComfyUI",
+ "reference": "https://github.com/LarryJane491/Image-Captioning-in-ComfyUI",
+ "files": [
+ "https://github.com/LarryJane491/Image-Captioning-in-ComfyUI"
+ ],
+ "install_type": "git-clone",
+ "description": "The LoRA Caption custom nodes, just like their name suggests, allow you to caption images so they are ready for LoRA training."
+    },
+    {
+        "author": "Layer-norm",
+        "title": "Comfyui lama remover",
+        "reference": "https://github.com/Layer-norm/comfyui-lama-remover",
+        "files": [
+            "https://github.com/Layer-norm/comfyui-lama-remover"
+        ],
+        "install_type": "git-clone",
+        "description": "A very simple ComfyUI node to remove items with a mask."
+    },
+    {
+        "author": "Taremin",
+        "title": "ComfyUI Prompt ExtraNetworks",
+        "reference": "https://github.com/Taremin/comfyui-prompt-extranetworks",
+        "files": [
+            "https://github.com/Taremin/comfyui-prompt-extranetworks"
+        ],
+        "install_type": "git-clone",
+        "description": "Instead of LoraLoader or HypernetworkLoader, it receives a prompt and loads and applies LoRA or HN based on the specifications within the prompt. The main purpose of this custom node is to allow changes without reconnecting the LoraLoader node when the prompt is randomly altered, etc."
+    },
+    {
+        "author": "Taremin",
+        "title": "ComfyUI String Tools",
+        "reference": "https://github.com/Taremin/comfyui-string-tools",
+        "files": [
+            "https://github.com/Taremin/comfyui-string-tools"
+        ],
+        "install_type": "git-clone",
+        "description": "This extension provides the StringToolsConcat node, which concatenates multiple texts, and the StringToolsRandomChoice node, which selects one randomly from multiple texts."
+    },
+    {
+        "author": "Taremin",
+        "title": "WebUI Monaco Prompt",
+        "reference": "https://github.com/Taremin/webui-monaco-prompt",
+        "files": [
+            "https://github.com/Taremin/webui-monaco-prompt"
+        ],
+        "install_type": "git-clone",
+        "description": "Make it possible to edit the prompt using the Monaco Editor, an editor implementation used in VSCode.\nNOTE: This extension supports both ComfyUI and A1111 simultaneously."
+    },
+    {
+        "author": "foxtrot-roger",
+        "title": "RF Nodes",
+        "reference": "https://github.com/foxtrot-roger/comfyui-rf-nodes",
+        "files": [
+            "https://github.com/foxtrot-roger/comfyui-rf-nodes"
+        ],
+        "install_type": "git-clone",
+        "description": "A bunch of nodes that can be useful to manipulate primitive types (numbers, text, ...). Also some helpers to generate text and timestamps."
+    },
+    {
+        "author": "abyz22",
+        "title": "image_control",
+        "reference": "https://github.com/abyz22/image_control",
+        "files": [
+            "https://github.com/abyz22/image_control"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:abyz22_Padding Image, abyz22_ImpactWildcardEncode, abyz22_setimageinfo, abyz22_SaveImage, abyz22_ImpactWildcardEncode_GetPrompt, abyz22_SetQueue, abyz22_drawmask, abyz22_FirstNonNull, abyz22_blendimages, abyz22_blend_onecolor. Please check the workflow in [a/https://github.com/abyz22/image_control](https://github.com/abyz22/image_control)"
+    },
+    {
+        "author": "HAL41",
+        "title": "ComfyUI aichemy nodes",
+        "reference": "https://github.com/HAL41/ComfyUI-aichemy-nodes",
+        "files": [
+            "https://github.com/HAL41/ComfyUI-aichemy-nodes"
+        ],
+        "install_type": "git-clone",
+        "description": "Simple node to handle scaling of YOLOv8 segmentation masks"
+    },
+    {
+        "author": "nkchocoai",
+        "title": "ComfyUI-SizeFromPresets",
+        "reference": "https://github.com/nkchocoai/ComfyUI-SizeFromPresets",
+        "files": [
+            "https://github.com/nkchocoai/ComfyUI-SizeFromPresets"
+        ],
+        "install_type": "git-clone",
+        "description": "Add a node that outputs the width and height of the size selected from the preset (.csv)."
+    },
+    {
+        "author": "nkchocoai",
+        "title": "ComfyUI-PromptUtilities",
+        "reference": "https://github.com/nkchocoai/ComfyUI-PromptUtilities",
+        "files": [
+            "https://github.com/nkchocoai/ComfyUI-PromptUtilities"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes: Format String, Join String List, Load Preset, Load Preset (Advanced), Const String, Const String (multi line). Add useful nodes related to prompts."
+    },
+    {
+        "author": "nkchocoai",
+        "title": "ComfyUI-TextOnSegs",
+        "reference": "https://github.com/nkchocoai/ComfyUI-TextOnSegs",
+        "files": [
+            "https://github.com/nkchocoai/ComfyUI-TextOnSegs"
+        ],
+        "install_type": "git-clone",
+        "description": "Add a node for drawing text with CR Draw Text of ComfyUI_Comfyroll_CustomNodes to the area of SEGS detected by Ultralytics Detector of ComfyUI-Impact-Pack."
+    },
+    {
+        "author": "JaredTherriault",
+        "title": "ComfyUI-JNodes",
+        "reference": "https://github.com/JaredTherriault/ComfyUI-JNodes",
+        "files": [
+            "https://github.com/JaredTherriault/ComfyUI-JNodes"
+        ],
+        "install_type": "git-clone",
+        "description": "Python and web UX improvements for ComfyUI.\n[w/'DynamicPrompts.js' and 'EditAttention.js' from the core, along with 'ImageFeed.js' and 'favicon.js' from the custom scripts of pythongosssss, are not compatible. Therefore, manual deletion of these files is required to use this web extension.]"
+    },
+    {
+        "author": "prozacgod",
+        "title": "ComfyUI Multi-Workspace",
+        "reference": "https://github.com/prozacgod/comfyui-pzc-multiworkspace",
+        "files": [
+            "https://github.com/prozacgod/comfyui-pzc-multiworkspace"
+        ],
+        "install_type": "git-clone",
+        "description": "A simple, quick, and dirty implementation of multiple workspaces within ComfyUI."
+    },
+    {
+        "author": "Siberpone",
+        "title": "Lazy Pony Prompter",
+        "reference": "https://github.com/Siberpone/lazy-pony-prompter",
+        "files": [
+            "https://github.com/Siberpone/lazy-pony-prompter"
+        ],
+        "install_type": "git-clone",
+        "description": "A pony prompt helper extension for AUTOMATIC1111's Stable Diffusion Web UI and ComfyUI that utilizes the full power of your favorite booru query syntax. Currently supports [a/Derpibooru](https://derpibooru.org) and [a/E621](https://e621.net/)."
+    },
+    {
+        "author": "chflame163",
+        "title": "ComfyUI Layer Style",
+        "reference": "https://github.com/chflame163/ComfyUI_LayerStyle",
+        "files": [
+            "https://github.com/chflame163/ComfyUI_LayerStyle"
+        ],
+        "install_type": "git-clone",
+        "description": "A set of nodes for ComfyUI that generate image effects like Adobe Photoshop's Layer Styles. The Drop Shadow is the first completed node, and follow-up work is in progress."
+    },
+    {
+        "author": "dave-palt",
+        "title": "comfyui_DSP_imagehelpers",
+        "reference": "https://github.com/dave-palt/comfyui_DSP_imagehelpers",
+        "files": [
+            "https://github.com/dave-palt/comfyui_DSP_imagehelpers"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes: DSP Image Concat"
+    },
+    {
+        "author": "Inzaniak",
+        "title": "Ranbooru for ComfyUI",
+        "reference": "https://github.com/Inzaniak/comfyui-ranbooru",
+        "files": [
+            "https://github.com/Inzaniak/comfyui-ranbooru"
+        ],
+        "install_type": "git-clone",
+        "description": "Ranbooru is an extension for ComfyUI. The purpose of this extension is to add a node that gets a random set of tags from booru pictures. 
This is mostly being used to help me test my checkpoints on a large variety of tags."
+    },
+    {
+        "author": "miosp",
+        "title": "ComfyUI-FBCNN",
+        "reference": "https://github.com/Miosp/ComfyUI-FBCNN",
+        "files": [
+            "https://github.com/Miosp/ComfyUI-FBCNN"
+        ],
+        "install_type": "git-clone",
+        "description": "A node for JPEG de-artifacting using [a/FBCNN](https://github.com/jiaxi-jiang/FBCNN)."
+    },
+    {
+        "author": "JcandZero",
+        "title": "ComfyUI_GLM4Node",
+        "reference": "https://github.com/JcandZero/ComfyUI_GLM4Node",
+        "files": [
+            "https://github.com/JcandZero/ComfyUI_GLM4Node"
+        ],
+        "install_type": "git-clone",
+        "description": "GLM4 Vision Integration"
+    },
+    {
+        "author": "darkpixel",
+        "title": "DarkPrompts",
+        "reference": "https://github.com/darkpixel/darkprompts",
+        "files": [
+            "https://github.com/darkpixel/darkprompts"
+        ],
+        "install_type": "git-clone",
+        "description": "Slightly better random prompt generation tools that allow combining and picking prompts from both file and text input sources."
+    },
+    {
+        "author": "shiimizu",
+        "title": "ComfyUI PhotoMaker Plus",
+        "reference": "https://github.com/shiimizu/ComfyUI-PhotoMaker-Plus",
+        "files": [
+            "https://github.com/shiimizu/ComfyUI-PhotoMaker-Plus"
+        ],
+        "install_type": "git-clone",
+        "description": "ComfyUI reference implementation for [a/PhotoMaker](https://github.com/TencentARC/PhotoMaker) models. [w/WARN:The repository name has been changed. For those who have previously installed it, please delete custom_nodes/ComfyUI-PhotoMaker from disk and reinstall this.]"
+    },
+    {
+        "author": "Qais Malkawi",
+        "title": "ComfyUI-Qais-Helper",
+        "reference": "https://github.com/QaisMalkawi/ComfyUI-QaisHelper",
+        "files": [
+            "https://github.com/QaisMalkawi/ComfyUI-QaisHelper"
+        ],
+        "install_type": "git-clone",
+        "description": "This extension adds a few custom QOL nodes that ComfyUI lacks by default."
+    },
+    {
+        "author": "longgui0318",
+        "title": "comfyui-mask-util",
+        "reference": "https://github.com/longgui0318/comfyui-mask-util",
+        "files": [
+            "https://github.com/longgui0318/comfyui-mask-util"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:Split Masks"
+    },
+    {
+        "author": "DimaChaichan",
+        "title": "LAizypainter-Exporter-ComfyUI",
+        "reference": "https://github.com/DimaChaichan/LAizypainter-Exporter-ComfyUI",
+        "files": [
+            "https://github.com/DimaChaichan/LAizypainter-Exporter-ComfyUI"
+        ],
+        "install_type": "git-clone",
+        "description": "This exporter is a plugin for ComfyUI, which can export tasks for [a/LAizypainter](https://github.com/DimaChaichan/LAizypainter).\nLAizypainter is a Photoshop plugin with which you can send tasks directly to a Stable Diffusion server. 
More information about a [a/Task](https://github.com/DimaChaichan/LAizypainter?tab=readme-ov-file#task)"
+    },
+    {
+        "author": "adriflex",
+        "title": "ComfyUI_Blender_Texdiff",
+        "reference": "https://github.com/adriflex/ComfyUI_Blender_Texdiff",
+        "files": [
+            "https://github.com/adriflex/ComfyUI_Blender_Texdiff"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:Blender viewport color, Blender Viewport depth"
+    },
+    {
+        "author": "Shraknard",
+        "title": "ComfyUI-Remover",
+        "reference": "https://github.com/Shraknard/ComfyUI-Remover",
+        "files": [
+            "https://github.com/Shraknard/ComfyUI-Remover"
+        ],
+        "install_type": "git-clone",
+        "description": "Custom node for ComfyUI that makes parts of the image transparent (face, background...)"
+    },
+    {
+        "author": "Abdullah Ozmantar",
+        "title": "InstaSwap Face Swap Node for ComfyUI",
+        "reference": "https://github.com/abdozmantar/ComfyUI-InstaSwap",
+        "files": [
+            "https://github.com/abdozmantar/ComfyUI-InstaSwap"
+        ],
+        "install_type": "git-clone",
+        "description": "Quick and easy ComfyUI custom nodes for ultra-quality, lightning-speed face swapping of humans."
+    },
+    {
+        "author": "FlyingFireCo",
+        "title": "tiled_ksampler",
+        "reference": "https://github.com/FlyingFireCo/tiled_ksampler",
+        "files": [
+            "https://github.com/FlyingFireCo/tiled_ksampler"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:Tiled KSampler, Asymmetric Tiled KSampler, Circular VAEDecode."
+    },
+    {
+        "author": "Nlar",
+        "title": "ComfyUI_CartoonSegmentation",
+        "reference": "https://github.com/Nlar/ComfyUI_CartoonSegmentation",
+        "files": [
+            "https://github.com/Nlar/ComfyUI_CartoonSegmentation"
+        ],
+        "install_type": "git-clone",
+        "description": "Front-end ComfyUI nodes for CartoonSegmentation. Based upon the work of the CartoonSegmentation repository, this project provides a front end to some of its features."
+    },
+    {
+        "author": "godspede",
+        "title": "ComfyUI Substring",
+        "reference": "https://github.com/godspede/ComfyUI_Substring",
+        "files": [
+            "https://github.com/godspede/ComfyUI_Substring"
+        ],
+        "install_type": "git-clone",
+        "description": "Just a simple substring node that takes text and length as input, and outputs the first length characters."
+    },
+    {
+        "author": "gokayfem",
+        "title": "VLM_nodes",
+        "reference": "https://github.com/gokayfem/ComfyUI_VLM_nodes",
+        "files": [
+            "https://github.com/gokayfem/ComfyUI_VLM_nodes"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:VisionQuestionAnswering Node, PromptGenerate Node"
+    },
+    {
+        "author": "Hiero207",
+        "title": "ComfyUI-Hiero-Nodes",
+        "reference": "https://github.com/Hiero207/ComfyUI-Hiero-Nodes",
+        "files": [
+            "https://github.com/Hiero207/ComfyUI-Hiero-Nodes"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:Post to Discord w/ Webhook"
+    },
+    {
+        "author": "azure-dragon-ai",
+        "title": "ComfyUI-ClipScore-Nodes",
+        "reference": "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes",
+        "files": [
+            "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:ImageScore, Loader, Image Processor, Real Image Processor, Fake Image Processor, Text Processor. 
ComfyUI Nodes for ClipScore"
+    },
+    {
+        "author": "yuvraj108c",
+        "title": "ComfyUI Whisper",
+        "reference": "https://github.com/yuvraj108c/ComfyUI-Whisper",
+        "files": [
+            "https://github.com/yuvraj108c/ComfyUI-Whisper"
+        ],
+        "install_type": "git-clone",
+        "description": "Transcribe audio and add subtitles to videos using Whisper in ComfyUI"
+    },
+    {
+        "author": "blepping",
+        "title": "ComfyUI-bleh",
+        "reference": "https://github.com/blepping/ComfyUI-bleh",
+        "files": [
+            "https://github.com/blepping/ComfyUI-bleh"
+        ],
+        "install_type": "git-clone",
+        "description": "Better TAESD previews, BlehHyperTile."
+    },
+    {
+        "author": "blepping",
+        "title": "ComfyUI-sonar",
+        "reference": "https://github.com/blepping/ComfyUI-sonar",
+        "files": [
+            "https://github.com/blepping/ComfyUI-sonar"
+        ],
+        "install_type": "git-clone",
+        "description": "A janky implementation of Sonar sampling (momentum-based sampling) for ComfyUI."
+    },
+    {
+        "author": "JerryOrbachJr",
+        "title": "ComfyUI-RandomSize",
+        "reference": "https://github.com/JerryOrbachJr/ComfyUI-RandomSize",
+        "files": [
+            "https://github.com/JerryOrbachJr/ComfyUI-RandomSize"
+        ],
+        "install_type": "git-clone",
+        "description": "A ComfyUI custom node that randomly selects a height and width pair from a list in a config file"
+    },
+    {
+        "author": "jamal-alkharrat",
+        "title": "ComfyUI_rotate_image",
+        "reference": "https://github.com/jamal-alkharrat/ComfyUI_rotate_image",
+        "files": [
+            "https://github.com/jamal-alkharrat/ComfyUI_rotate_image"
+        ],
+        "install_type": "git-clone",
+        "description": "ComfyUI Custom Node to Rotate Images, Img2Img node."
+    },
+    {
+        "author": "mape",
+        "title": "mape's ComfyUI Helpers",
+        "reference": "https://github.com/mape/ComfyUI-mape-Helpers",
+        "files": [
+            "https://github.com/mape/ComfyUI-mape-Helpers"
+        ],
+        "install_type": "git-clone",
+        "description": "Multi-monitor image preview, Variable Assignment/Wireless Nodes, Prompt Tweaking, Command Palette, Pinned favourite nodes, Node navigation, Fuzzy search, Node time tracking, Organizing and Error management. For more info visit: [a/https://comfyui.ma.pe/](https://comfyui.ma.pe/)"
+    },
+    {
+        "author": "zhongpei",
+        "title": "Comfyui_image2prompt",
+        "reference": "https://github.com/zhongpei/Comfyui_image2prompt",
+        "files": [
+            "https://github.com/zhongpei/Comfyui_image2prompt"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:Image to Text, Loader Image to Text Model."
+    },
+    {
+        "author": "Loewen-Hob",
+        "title": "Rembg Background Removal Node for ComfyUI",
+        "reference": "https://github.com/Loewen-Hob/rembg-comfyui-node-better",
+        "files": [
+            "https://github.com/Loewen-Hob/rembg-comfyui-node-better"
+        ],
+        "install_type": "git-clone",
+        "description": "This custom node is based on the [a/rembg-comfyui-node](https://github.com/Jcd1230/rembg-comfyui-node) but provides additional functionality to select ONNX models."
+    },
+    {
+        "author": "HaydenReeve",
+        "title": "ComfyUI Better Strings",
+        "reference": "https://github.com/HaydenReeve/ComfyUI-Better-Strings",
+        "files": [
+            "https://github.com/HaydenReeve/ComfyUI-Better-Strings"
+        ],
+        "install_type": "git-clone",
+        "description": "Strings should be easy, and simple. 
This extension aims to provide a set of nodes that make working with strings in ComfyUI a little bit easier."
+    },
+    {
+        "author": "StartHua",
+        "title": "ComfyUI_Seg_VITON",
+        "reference": "https://github.com/StartHua/ComfyUI_Seg_VITON",
+        "files": [
+            "https://github.com/StartHua/ComfyUI_Seg_VITON"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:segformer_clothes, segformer_agnostic, segformer_remove_bg, stabel_vition. Nodes for model dress-up."
+    },
+    {
+        "author": "StartHua",
+        "title": "Comfyui_joytag",
+        "reference": "https://github.com/StartHua/Comfyui_joytag",
+        "files": [
+            "https://github.com/StartHua/Comfyui_joytag"
+        ],
+        "install_type": "git-clone",
+        "description": "JoyTag is a state-of-the-art AI vision model for tagging images, with a focus on sex positivity and inclusivity. It uses the Danbooru tagging schema, but works across a wide range of images, from hand-drawn to photographic.\nDownload the weights and put them under checkpoints: [a/https://huggingface.co/fancyfeast/joytag/tree/main](https://huggingface.co/fancyfeast/joytag/tree/main)"
+    },
+    {
+        "author": "StartHua",
+        "title": "comfyui_segformer_b2_clothes",
+        "reference": "https://github.com/StartHua/Comfyui_segformer_b2_clothes",
+        "files": [
+            "https://github.com/StartHua/Comfyui_segformer_b2_clothes"
+        ],
+        "install_type": "git-clone",
+        "description": "A SegFormer model fine-tuned on the ATR dataset for clothes segmentation; it can also be used for human segmentation!\nDownload the weights and put them under checkpoints: [a/https://huggingface.co/mattmdjaga/segformer_b2_clothes](https://huggingface.co/mattmdjaga/segformer_b2_clothes)"
+    },
+    {
+        "author": "ricklove",
+        "title": "comfyui-ricklove",
+        "reference": "https://github.com/ricklove/comfyui-ricklove",
+        "files": [
+            "https://github.com/ricklove/comfyui-ricklove"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes: Image Crop and Resize by Mask, Image Uncrop, Image Shadow, Optical Flow (Dip), Warp Image with Flow, Image Threshold (Channels), Finetune Variable, Finetune Analyze, Finetune Analyze Batch, ... Misc ComfyUI nodes by Rick Love"
+    },
+    {
+        "author": "nosiu",
+        "title": "ComfyUI InstantID Faceswapper",
+        "reference": "https://github.com/nosiu/comfyui-instantId-faceswap",
+        "files": [
+            "https://github.com/nosiu/comfyui-instantId-faceswap"
+        ],
+        "install_type": "git-clone",
+        "description": "Implementation of [a/faceswap](https://github.com/nosiu/InstantID-faceswap/tree/main) based on [a/InstantID](https://github.com/InstantID/InstantID) for ComfyUI. Allows usage of [a/LCM Lora](https://huggingface.co/latent-consistency/lcm-lora-sdxl) which can produce good results in only a few generation steps.\nNOTE: Works ONLY with SDXL checkpoints."
+    },
+    {
+        "author": "zhongpei",
+        "title": "ComfyUI for InstructIR",
+        "reference": "https://github.com/zhongpei/ComfyUI-InstructIR",
+        "files": [
+            "https://github.com/zhongpei/ComfyUI-InstructIR"
+        ],
+        "install_type": "git-clone",
+        "description": "Enhancing Image Restoration. (ref:[a/InstructIR](https://github.com/mv-lab/InstructIR))"
+    },
+    {
+        "author": "LyazS",
+        "title": "Anime Character Segmentation node for comfyui",
+        "reference": "https://github.com/LyazS/comfyui-anime-seg",
+        "files": [
+            "https://github.com/LyazS/comfyui-anime-seg"
+        ],
+        "install_type": "git-clone",
+        "description": "An Anime Character Segmentation node for ComfyUI, based on [this hf space](https://huggingface.co/spaces/skytnt/anime-remove-background)."
+    },
+    {
+        "author": "Chan-0312",
+        "title": "ComfyUI-IPAnimate",
+        "reference": "https://github.com/Chan-0312/ComfyUI-IPAnimate",
+        "files": [
+            "https://github.com/Chan-0312/ComfyUI-IPAnimate"
+        ],
+        "install_type": "git-clone",
+        "description": "This is a project that generates videos frame by frame based on IPAdapter+ControlNet. Unlike [a/Steerable-motion](https://github.com/banodoco/Steerable-Motion), we do not rely on AnimateDiff, primarily because the videos generated by AnimateDiff are often blurry. Through frame-by-frame control using IPAdapter+ControlNet, we can produce higher-definition and more controllable videos."
+    },
+    {
+        "author": "trumanwong",
+        "title": "ComfyUI-NSFW-Detection",
+        "reference": "https://github.com/trumanwong/ComfyUI-NSFW-Detection",
+        "files": [
+            "https://github.com/trumanwong/ComfyUI-NSFW-Detection"
+        ],
+        "install_type": "git-clone",
+        "description": "An implementation of NSFW Detection for ComfyUI"
+    },
+    {
+        "author": "TemryL",
+        "title": "ComfyS3",
+        "reference": "https://github.com/TemryL/ComfyS3",
+        "files": [
+            "https://github.com/TemryL/ComfyS3"
+        ],
+        "install_type": "git-clone",
+        "description": "ComfyS3 seamlessly integrates with [a/Amazon S3](https://aws.amazon.com/en/s3/) in ComfyUI. This open-source project provides custom nodes for effortless loading and saving of images, videos, and checkpoint models directly from S3 buckets within the ComfyUI graph interface."
+    },
+    {
+        "author": "davask",
+        "title": "MarasIT Nodes",
+        "reference": "https://github.com/davask/ComfyUI-MarasIT-Nodes",
+        "files": [
+            "https://github.com/davask/ComfyUI-MarasIT-Nodes"
+        ],
+        "install_type": "git-clone",
+        "description": "This is a revised version of the Bus node from the [a/Was Node Suite](https://github.com/WASasquatch/was-node-suite-comfyui) to integrate more inputs/outputs."
+    },
+    {
+        "author": "yffyhk",
+        "title": "comfyui_auto_danbooru",
+        "reference": "https://github.com/yffyhk/comfyui_auto_danbooru",
+        "files": [
+            "https://github.com/yffyhk/comfyui_auto_danbooru"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes: Get Danbooru, Tag Encode"
+    },
+    {
+        "author": "dfl",
+        "title": "comfyui-clip-with-break",
+        "reference": "https://github.com/dfl/comfyui-clip-with-break",
+        "files": [
+            "https://github.com/dfl/comfyui-clip-with-break"
+        ],
+        "install_type": "git-clone",
+        "description": "CLIP text encoder with BREAK formatting like A1111 (uses conditioning concat)"
+    },
+    {
+        "author": "MarkoCa1",
+        "title": "ComfyUI_Segment_Mask",
+        "reference": "https://github.com/MarkoCa1/ComfyUI_Segment_Mask",
+        "files": [
+            "https://github.com/MarkoCa1/ComfyUI_Segment_Mask"
+        ],
+        "install_type": "git-clone",
+        "description": "Mask cutout based on Segment Anything."
+    },
+    {
+        "author": "antrobot",
+        "title": "antrobots ComfyUI Nodepack",
+        "reference": "https://github.com/antrobot1234/antrobots-comfyUI-nodepack",
+        "files": [
+            "https://github.com/antrobot1234/antrobots-comfyUI-nodepack"
+        ],
+        "install_type": "git-clone",
+        "description": "A small node pack containing various things I felt ought to be in base ComfyUI. Currently includes some image-handling nodes to help with inpainting, a version of KSampler (advanced) that allows for denoise, and a node that can swap its inputs. Remember to make an issue if you experience any bugs or errors!"
+    },
+    {
+        "author": "bilal-arikan",
+        "title": "ComfyUI_TextAssets",
+        "reference": "https://github.com/bilal-arikan/ComfyUI_TextAssets",
+        "files": [
+            "https://github.com/bilal-arikan/ComfyUI_TextAssets"
+        ],
+        "install_type": "git-clone",
+        "description": "With this node you can upload text files from your local computer to the input folder."
+    },
+    {
+        "author": "kadirnar",
+        "title": "ComfyUI-Transformers",
+        "reference": "https://github.com/kadirnar/ComfyUI-Transformers",
+        "files": [
+            "https://github.com/kadirnar/ComfyUI-Transformers"
+        ],
+        "install_type": "git-clone",
+        "description": "ComfyUI-Transformers is a cutting-edge project combining the power of computer vision and natural language processing to create intuitive and user-friendly interfaces. Our goal is to make technology more accessible and engaging."
+    },
+    {
+        "author": "digitaljohn",
+        "title": "ComfyUI-ProPost",
+        "reference": "https://github.com/digitaljohn/comfyui-propost",
+        "files": [
+            "https://github.com/digitaljohn/comfyui-propost"
+        ],
+        "install_type": "git-clone",
+        "description": "A set of custom ComfyUI nodes for performing basic post-processing effects including Film Grain and Vignette. These effects can help to take the edge off AI imagery and make it feel more natural."
+    },
+    {
+        "author": "DonBaronFactory",
+        "title": "ComfyUI-Cre8it-Nodes",
+        "reference": "https://github.com/DonBaronFactory/ComfyUI-Cre8it-Nodes",
+        "files": [
+            "https://github.com/DonBaronFactory/ComfyUI-Cre8it-Nodes"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:CRE8IT Serial Prompter, CRE8IT Apply Serial Prompter, CRE8IT Image Sizer. A few simple nodes to facilitate working with ComfyUI workflows."
+    },
+    {
+        "author": "deforum",
+        "title": "Deforum Nodes",
+        "reference": "https://github.com/XmYx/deforum-comfy-nodes",
+        "files": [
+            "https://github.com/XmYx/deforum-comfy-nodes"
+        ],
+        "install_type": "git-clone",
+        "description": "Official Deforum animation pipeline tools that provide a unique way to create frame-by-frame generative motion art."
+    },
+    {
+        "author": "adbrasi",
+        "title": "ComfyUI-TrashNodes-DownloadHuggingface",
+        "reference": "https://github.com/adbrasi/ComfyUI-TrashNodes-DownloadHuggingface",
+        "files": [
+            "https://github.com/adbrasi/ComfyUI-TrashNodes-DownloadHuggingface"
+        ],
+        "install_type": "git-clone",
+        "description": "ComfyUI-TrashNodes-DownloadHuggingface is a ComfyUI node designed to facilitate the download of models you have just trained and uploaded to Hugging Face. This node is particularly useful for users who employ Google Colab for training and need to quickly download their models for deployment."
+    },
+    {
+        "author": "mbrostami",
+        "title": "ComfyUI-HF",
+        "reference": "https://github.com/mbrostami/ComfyUI-HF",
+        "files": [
+            "https://github.com/mbrostami/ComfyUI-HF"
+        ],
+        "install_type": "git-clone",
+        "description": "ComfyUI Node to work with Hugging Face repositories"
+    },
+    {
+        "author": "Billius-AI",
+        "title": "ComfyUI-Path-Helper",
+        "reference": "https://github.com/Billius-AI/ComfyUI-Path-Helper",
+        "files": [
+            "https://github.com/Billius-AI/ComfyUI-Path-Helper"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:Create Project Root, Add Folder, Add Folder Advanced, Add File Name Prefix, Add File Name Prefix Advanced, ShowPath"
+    },
+    {
+        "author": "Franck-Demongin",
+        "title": "NX_PromptStyler",
+        "reference": "https://github.com/Franck-Demongin/NX_PromptStyler",
+        "files": [
+            "https://github.com/Franck-Demongin/NX_PromptStyler"
+        ],
+        "install_type": "git-clone",
+        "description": "A custom node for ComfyUI to create a prompt based on a list of keywords saved in CSV files."
+    },
+    {
+        "author": "xiaoxiaodesha",
+        "title": "hd-nodes-comfyui",
+        "reference": "https://github.com/xiaoxiaodesha/hd_node",
+        "files": [
+            "https://github.com/xiaoxiaodesha/hd_node"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:Combine HDMasks, Cover HDMasks, HD FaceIndex, HD SmoothEdge, HD GetMaskArea, HD Image Levels, HD Ultimate SD Upscale"
+    },
+    {
+        "author": "ShmuelRonen",
+        "title": "ComfyUI-SVDResizer",
+        "reference": "https://github.com/ShmuelRonen/ComfyUI-SVDResizer",
+        "files": [
+            "https://github.com/ShmuelRonen/ComfyUI-SVDResizer"
+        ],
+        "install_type": "git-clone",
+        "description": "SVDResizer is a helper for resizing the source image according to the sizes enabled in Stable Video Diffusion. The rationale behind changing the size of the image in steps between 576 and 1024 is the use of the greatest common divisor of these two numbers, which is 64. SVD is lenient with resizing that adheres to this rule, so the chance of getting coherent video at a size other than the standard 576x1024 is greater. It is advisable to keep the value 1024 constant and play with the second size to maintain the stability of the result."
+    },
+    {
+        "author": "redhottensors",
+        "title": "ComfyUI-Prediction",
+        "reference": "https://github.com/redhottensors/ComfyUI-Prediction",
+        "files": [
+            "https://github.com/redhottensors/ComfyUI-Prediction"
+        ],
+        "install_type": "git-clone",
+        "description": "Fully customizable Classifier-Free Guidance for ComfyUI."
+    },
+    {
+        "author": "Mamaaaamooooo",
+        "title": "Batch Rembg for ComfyUI",
+        "reference": "https://github.com/Mamaaaamooooo/batchImg-rembg-ComfyUI-nodes",
+        "files": [
+            "https://github.com/Mamaaaamooooo/batchImg-rembg-ComfyUI-nodes"
+        ],
+        "install_type": "git-clone",
+        "description": "Removes the background from multiple images."
+    },
+    {
+        "author": "jordoh",
+        "title": "ComfyUI Deepface",
+        "reference": "https://github.com/jordoh/ComfyUI-Deepface",
+        "files": [
+            "https://github.com/jordoh/ComfyUI-Deepface"
+        ],
+        "install_type": "git-clone",
+        "description": "ComfyUI nodes wrapping the [a/deepface](https://github.com/serengil/deepface) library."
+    },
+    {
+        "author": "yuvraj108c",
+        "title": "ComfyUI-Pronodes",
+        "reference": "https://github.com/yuvraj108c/ComfyUI-Pronodes",
+        "files": [
+            "https://github.com/yuvraj108c/ComfyUI-Pronodes"
+        ],
+        "install_type": "git-clone",
+        "description": "A collection of nice utility nodes for ComfyUI"
+    },
+    {
+        "author": "GavChap",
+        "title": "ComfyUI-CascadeResolutions",
+        "reference": "https://github.com/GavChap/ComfyUI-CascadeResolutions",
+        "files": [
+            "https://github.com/GavChap/ComfyUI-CascadeResolutions"
+        ],
+        "install_type": "git-clone",
+        "description": "Nodes:Cascade Resolutions"
+    },
+
+
+    {
+        "author": "Ser-Hilary",
+        "title": "SDXL_sizing",
+        "reference": "https://github.com/Ser-Hilary/SDXL_sizing",
+        "files": [
+            "https://github.com/Ser-Hilary/SDXL_sizing/raw/main/conditioning_sizing_for_SDXL.py"
+        ],
+        "install_type": "copy",
+        "description": "Nodes:sizing_node. Size calculation node related to image size in prompts supported by SDXL."
+    },
+    {
+        "author": "ailex000",
+        "title": "Image Gallery",
+        "reference": "https://github.com/ailex000/ComfyUI-Extensions",
+        "js_path": "image-gallery",
+        "files": [
+            "https://github.com/ailex000/ComfyUI-Extensions/raw/main/image-gallery/imageGallery.js"
+        ],
+        "install_type": "copy",
+        "description": "Custom JavaScript extensions for better ComfyUI UX. Supported nodes: PreviewImage, SaveImage. Double-click an image to open it."
+    },
+    {
+        "author": "rock-land",
+        "title": "graphNavigator",
+        "reference": "https://github.com/rock-land/graphNavigator",
+        "js_path": "graphNavigator",
+        "files": [
+            "https://github.com/rock-land/graphNavigator/raw/main/graphNavigator/graphNavigator.js"
+        ],
+        "install_type": "copy",
+        "description": "ComfyUI Web Extension for saving views and navigating graphs."
+    },
+    {
+        "author": "diffus3",
+        "title": "diffus3/ComfyUI-extensions",
+        "reference": "https://github.com/diffus3/ComfyUI-extensions",
+        "js_path": "diffus3",
+        "files": [
+            "https://github.com/diffus3/ComfyUI-extensions/raw/main/multiReroute/multireroute.js",
+            "https://github.com/diffus3/ComfyUI-extensions/raw/main/setget/setget.js"
+        ],
+        "install_type": "copy",
+        "description": "Extensions: subgraph, setget, multiReroute"
+    },
+    {
+        "author": "m957ymj75urz",
+        "title": "m957ymj75urz/ComfyUI-Custom-Nodes",
+        "reference": "https://github.com/m957ymj75urz/ComfyUI-Custom-Nodes",
+        "js_path": "m957ymj75urz",
+        "files": [
+            "https://github.com/m957ymj75urz/ComfyUI-Custom-Nodes/raw/main/clip-text-encode-split/clip_text_encode_split.py",
+            "https://github.com/m957ymj75urz/ComfyUI-Custom-Nodes/raw/main/colors/colors.js"
+        ],
+        "install_type": "copy",
+        "description": "Nodes: RawText, RawTextCLIPEncode, RawTextCombine, RawTextReplace, Extension: m957ymj75urz.colors"
+    },
+    {
+        "author": "Bikecicle",
+        "title": "Waveform Extensions",
+        "reference": "https://github.com/Bikecicle/ComfyUI-Waveform-Extensions",
+        "files": [
+            "https://github.com/Bikecicle/ComfyUI-Waveform-Extensions/raw/main/EXT_AudioManipulation.py",
+            "https://github.com/Bikecicle/ComfyUI-Waveform-Extensions/raw/main/EXT_VariationUtils.py"
+        ],
+        "install_type": "copy",
+        "description": "Some additional audio utilities for use on top of the Sample Diffusion ComfyUI Extension"
+    },
+    {
+        "author": "dawangraoming",
+        "title": "KSampler GPU",
+        "reference": "https://github.com/dawangraoming/ComfyUI_ksampler_gpu",
+        "files": [
+            "https://github.com/dawangraoming/ComfyUI_ksampler_gpu/raw/main/ksampler_gpu.py"
+        ],
+        "install_type": "copy",
+        "description": "KSampler is provided, based on GPU random 
noise" + }, + { + "author": "fitCorder", + "title": "fcSuite", + "reference": "https://github.com/fitCorder/fcSuite", + "files": [ + "https://github.com/fitCorder/fcSuite/raw/main/fcSuite.py" + ], + "install_type": "copy", + "description": "fcFloatMatic is a custom module, that when configured correctly will increment through the lines generating you loras at different strengths. The JSON file will load the config." + }, + { + "author": "lrzjason", + "title": "ComfyUIJasonNode", + "reference": "https://github.com/lrzjason/ComfyUIJasonNode", + "files": [ + "https://github.com/lrzjason/ComfyUIJasonNode/raw/main/SDXLMixSampler.py", + "https://github.com/lrzjason/ComfyUIJasonNode/raw/main/LatentByRatio.py", + "" + ], + "install_type": "copy", + "description": "Nodes:SDXLMixSampler, LatentByRatio" + }, + { + "author": "lordgasmic", + "title": "Wildcards", + "reference": "https://github.com/lordgasmic/ComfyUI-Wildcards", + "files": [ + "https://github.com/lordgasmic/ComfyUI-Wildcards/raw/master/wildcards.py" + ], + "install_type": "copy", + "description": "Nodes:CLIPTextEncodeWithWildcards. This wildcard node is a wildcard node that operates based on the seed." + }, + { + "author": "throttlekitty", + "title": "SDXLCustomAspectRatio", + "reference": "https://github.com/throttlekitty/SDXLCustomAspectRatio", + "files": [ + "https://raw.githubusercontent.com/throttlekitty/SDXLCustomAspectRatio/main/SDXLAspectRatio.py" + ], + "install_type": "copy", + "description": "A quick and easy ComfyUI custom node for setting SDXL-friendly aspect ratios." + }, + { + "author": "s1dlx", + "title": "comfy_meh", + "reference": "https://github.com/s1dlx/comfy_meh", + "files": [ + "https://github.com/s1dlx/comfy_meh/raw/main/meh.py" + ], + "install_type": "copy", + "description": "Advanced merging methods." + }, + { + "author": "tudal", + "title": "Hakkun-ComfyUI-nodes", + "reference": "https://github.com/tudal/Hakkun-ComfyUI-nodes", + "files": [ + "https://github.com/tudal/Hakkun-ComfyUI-nodes/raw/main/hakkun_nodes.py" + ], + "install_type": "copy", + "description": "Nodes: Prompt parser. ComfyUI extra nodes. Mostly prompt parsing." + }, + { + "author": "SadaleNet", + "title": "ComfyUI A1111-like Prompt Custom Node Solution", + "reference": "https://github.com/SadaleNet/CLIPTextEncodeA1111-ComfyUI", + "files": [ + "https://github.com/SadaleNet/CLIPTextEncodeA1111-ComfyUI/raw/master/custom_nodes/clip_text_encoder_a1111.py" + ], + "install_type": "copy", + "description": "Nodes: CLIPTextEncodeA1111, RerouteTextForCLIPTextEncodeA1111." + }, + { + "author": "wsippel", + "title": "SDXLResolutionPresets", + "reference": "https://github.com/wsippel/comfyui_ws", + "files": [ + "https://github.com/wsippel/comfyui_ws/raw/main/sdxl_utility.py" + ], + "install_type": "copy", + "description": "Nodes: SDXLResolutionPresets. Easy access to the officially supported resolutions, in both horizontal and vertical formats: 1024x1024, 1152x896, 1216x832, 1344x768, 1536x640" + }, + { + "author": "nicolai256", + "title": "comfyUI_Nodes_nicolai256", + "reference": "https://github.com/nicolai256/comfyUI_Nodes_nicolai256", + "files": [ + "https://github.com/nicolai256/comfyUI_Nodes_nicolai256/raw/main/yugioh-presets.py" + ], + "install_type": "copy", + "description": "Nodes: yugioh_Presets. 
by Nicolai256, inspired by throttlekitty's SDXLAspectRatio"
+    },
+    {
+        "author": "Onierous",
+        "title": "QRNG_Node_ComfyUI",
+        "reference": "https://github.com/Onierous/QRNG_Node_ComfyUI",
+        "files": [
+            "https://github.com/Onierous/QRNG_Node_ComfyUI/raw/main/qrng_node.py"
+        ],
+        "install_type": "copy",
+        "description": "Nodes: QRNG Node CSV. A node that takes in an array of random numbers from the ANU QRNG API and stores them locally for generating quantum random number noise_seeds in ComfyUI"
+    },
+    {
+        "author": "ntdviet",
+        "title": "ntdviet/comfyui-ext",
+        "reference": "https://github.com/ntdviet/comfyui-ext",
+        "files": [
+            "https://github.com/ntdviet/comfyui-ext/raw/main/custom_nodes/gcLatentTunnel/gcLatentTunnel.py"
+        ],
+        "install_type": "copy",
+        "description": "Nodes:LatentGarbageCollector. This ComfyUI custom node flushes the GPU cache and empties CUDA interprocess memory. It's helpful for low-memory environments such as the free Google Colab, especially when the workflow VAE-decodes latents larger than 1500x1500."
+    },
+    {
+        "author": "alkemann",
+        "title": "alkemann nodes",
+        "reference": "https://gist.github.com/alkemann/7361b8eb966f29c8238fd323409efb68",
+        "files": [
+            "https://gist.github.com/alkemann/7361b8eb966f29c8238fd323409efb68/raw/f9605be0b38d38d3e3a2988f89248ff557010076/alkemann.py"
+        ],
+        "install_type": "copy",
+        "description": "Nodes:Int to Text, Seed With Text, Save A1 Image."
+    },
+    {
+        "author": "catscandrive",
+        "title": "Image loader with subfolders",
+        "reference": "https://github.com/catscandrive/comfyui-imagesubfolders",
+        "files": [
+            "https://github.com/catscandrive/comfyui-imagesubfolders/raw/main/loadImageWithSubfolders.py"
+        ],
+        "install_type": "copy",
+        "description": "Adds an Image Loader node that also shows images in subfolders of the default input directory"
+    },
+    {
+        "author": "Smuzzies",
+        "title": "Chatbox Overlay node for ComfyUI",
+        "reference": "https://github.com/Smuzzies/comfyui_chatbox_overlay",
+        "files": [
+            "https://github.com/Smuzzies/comfyui_chatbox_overlay/raw/main/chatbox_overlay.py"
+        ],
+        "install_type": "copy",
+        "description": "Nodes: Chatbox Overlay. Custom node for ComfyUI to add a text box over a processed image before the save node."
+    },
+    {
+        "author": "CaptainGrock",
+        "title": "ComfyUIInvisibleWatermark",
+        "reference": "https://github.com/CaptainGrock/ComfyUIInvisibleWatermark",
+        "files": [
+            "https://github.com/CaptainGrock/ComfyUIInvisibleWatermark/raw/main/Invisible%20Watermark.py"
+        ],
+        "install_type": "copy",
+        "description": "Nodes:Apply Invisible Watermark, Extract Watermark. Adds up to 12 characters encoded into an image that can be extracted."
+    },
+    {
+        "author": "fearnworks",
+        "title": "Fearnworks Custom Nodes",
+        "reference": "https://github.com/fearnworks/ComfyUI_FearnworksNodes",
+        "files": [
+            "https://github.com/fearnworks/ComfyUI_FearnworksNodes/raw/main/fw_nodes.py"
+        ],
+        "install_type": "copy",
+        "description": "A collection of ComfyUI nodes. These nodes are tailored for specific tasks, such as counting files in directories and sorting text segments based on token counts. Currently this is only tested on SDXL 1.0 models. 
An additional switch is needed to handle 1.x"
+    },
+    {
+        "author": "LZC",
+        "title": "Hayo comfyui nodes",
+        "reference": "https://github.com/1shadow1/hayo_comfyui_nodes",
+        "files": [
+            "https://github.com/1shadow1/hayo_comfyui_nodes/raw/main/LZCNodes.py"
+        ],
+        "install_type": "copy",
+        "description": "Nodes:tensor_trans_pil, Make Transparent mask, MergeImages, words_generatee, load_PIL image"
+    },
+    {
+        "author": "celsojr2013",
+        "title": "ComfyUI SimpleTools Suit",
+        "reference": "https://github.com/celsojr2013/comfyui_simpletools",
+        "files": [
+            "https://github.com/celsojr2013/comfyui_simpletools/raw/main/google_translator.py",
+            "https://github.com/celsojr2013/comfyui_simpletools/raw/main/parameters.py",
+            "https://github.com/celsojr2013/comfyui_simpletools/raw/main/resolution_solver.py"
+        ],
+        "install_type": "copy",
+        "description": "Nodes:Simple Google Translator Client, Simple Mustache Parameter Switcher, Simple Latent Resolution Solver."
+    },
+    {
+        "author": "underclockeddev",
+        "title": "Preview Subselection Node for ComfyUI",
+        "reference": "https://github.com/underclockeddev/ComfyUI-PreviewSubselection-Node",
+        "files": [
+            "https://github.com/underclockeddev/ComfyUI-PreviewSubselection-Node/raw/master/preview_subselection.py"
+        ],
+        "install_type": "copy",
+        "description": "A node which takes in x, y, width, height, total width, and total height, in order to accurately represent the area of an image which is covered by area-based conditioning."
+    },
+    {
+        "author": "AshMartian",
+        "title": "Dir Gir",
+        "reference": "https://github.com/AshMartian/ComfyUI-DirGir",
+        "files": [
+            "https://github.com/AshMartian/ComfyUI-DirGir/raw/main/dir_picker.py",
+            "https://github.com/AshMartian/ComfyUI-DirGir/raw/main/dir_loop.py"
+        ],
+        "install_type": "copy",
+        "description": "A collection of ComfyUI directory automation utility nodes. Directory Get It Right adds a GUI directory browser and a smart directory loop/iteration node that supports regex and file extension filtering."
+    },
+
+    {
+        "author": "theally",
+        "title": "TheAlly's Custom Nodes",
+        "reference": "https://civitai.com/models/19625?modelVersionId=23296",
+        "files": [
+            "https://civitai.com/api/download/models/25114",
+            "https://civitai.com/api/download/models/24679",
+            "https://civitai.com/api/download/models/24154",
+            "https://civitai.com/api/download/models/23884",
+            "https://civitai.com/api/download/models/23649",
+            "https://civitai.com/api/download/models/23467",
+            "https://civitai.com/api/download/models/23296"
+        ],
+        "install_type": "unzip",
+        "description": "Custom nodes for ComfyUI by TheAlly."
+    },
+    {
+        "author": "xss",
+        "title": "Custom Nodes by xss",
+        "reference": "https://civitai.com/models/24869/comfyui-custom-nodes-by-xss",
+        "files": [
+            "https://civitai.com/api/download/models/32717",
+            "https://civitai.com/api/download/models/47776",
+            "https://civitai.com/api/download/models/29772",
+            "https://civitai.com/api/download/models/31618",
+            "https://civitai.com/api/download/models/31591",
+            "https://civitai.com/api/download/models/29773",
+            "https://civitai.com/api/download/models/29774",
+            "https://civitai.com/api/download/models/29755",
+            "https://civitai.com/api/download/models/29750"
+        ],
+        "install_type": "unzip",
+        "description": "Various image processing nodes."
+ }, + { + "author": "aimingfail", + "title": "Image2Halftone Node for ComfyUI", + "reference": "https://civitai.com/models/143293/image2halftone-node-for-comfyui", + "files": [ + "https://civitai.com/api/download/models/158997" + ], + "install_type": "unzip", + "description": "This is a node to convert an image into a CMYK Halftone dot image." + } + ] +} diff --git a/custom_nodes/ComfyUI-Manager/extension-node-map.json b/custom_nodes/ComfyUI-Manager/extension-node-map.json new file mode 100644 index 0000000000000000000000000000000000000000..06eb608e9a068fb14f976af858a10ae0dc43bec2 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/extension-node-map.json @@ -0,0 +1,8424 @@ +{ + "https://gist.github.com/alkemann/7361b8eb966f29c8238fd323409efb68/raw/f9605be0b38d38d3e3a2988f89248ff557010076/alkemann.py": [ + [ + "Int to Text", + "Save A1 Image", + "Seed With Text" + ], + { + "title_aux": "alkemann nodes" + } + ], + "https://git.mmaker.moe/mmaker/sd-webui-color-enhance": [ + [ + "MMakerColorBlend", + "MMakerColorEnhance" + ], + { + "title_aux": "Color Enhance" + } + ], + "https://github.com/0xbitches/ComfyUI-LCM": [ + [ + "LCM_Sampler", + "LCM_Sampler_Advanced", + "LCM_img2img_Sampler", + "LCM_img2img_Sampler_Advanced" + ], + { + "title_aux": "Latent Consistency Model for ComfyUI" + } + ], + "https://github.com/1shadow1/hayo_comfyui_nodes/raw/main/LZCNodes.py": [ + [ + "LoadPILImages", + "MergeImages", + "make_transparentmask", + "tensor_trans_pil", + "words_generatee" + ], + { + "title_aux": "Hayo comfyui nodes" + } + ], + "https://github.com/42lux/ComfyUI-safety-checker": [ + [ + "Safety Checker" + ], + { + "title_aux": "ComfyUI-safety-checker" + } + ], + "https://github.com/54rt1n/ComfyUI-DareMerge": [ + [ + "DM_AdvancedDareModelMerger", + "DM_AdvancedModelMerger", + "DM_AttentionGradient", + "DM_BlockGradient", + "DM_BlockModelMerger", + "DM_DareClipMerger", + "DM_DareModelMergerBlock", + "DM_DareModelMergerElement", + "DM_DareModelMergerMBW", + "DM_GradientEdit", + "DM_GradientOperations", + "DM_GradientReporting", + "DM_InjectNoise", + "DM_LoRALoaderTags", + "DM_LoRAReporting", + "DM_MBWGradient", + "DM_MagnitudeMasker", + "DM_MaskEdit", + "DM_MaskOperations", + "DM_MaskReporting", + "DM_ModelReporting", + "DM_NormalizeModel", + "DM_QuadMasker", + "DM_ShellGradient", + "DM_SimpleMasker" + ], + { + "title_aux": "ComfyUI-DareMerge" + } + ], + "https://github.com/80sVectorz/ComfyUI-Static-Primitives": [ + [ + "FloatStaticPrimitive", + "IntStaticPrimitive", + "StringMlStaticPrimitive", + "StringStaticPrimitive" + ], + { + "title_aux": "ComfyUI-Static-Primitives" + } + ], + "https://github.com/AInseven/ComfyUI-fastblend": [ + [ + "FillDarkMask", + "InterpolateKeyFrame", + "MaskListcaptoBatch", + "MyOpenPoseNode", + "SmoothVideo", + "reBatchImage" + ], + { + "title_aux": "ComfyUI-fastblend" + } + ], + "https://github.com/AIrjen/OneButtonPrompt": [ + [ + "AutoNegativePrompt", + "CreatePromptVariant", + "OneButtonPreset", + "OneButtonPrompt", + "SavePromptToFile" + ], + { + "title_aux": "One Button Prompt" + } + ], + "https://github.com/AbdullahAlfaraj/Comfy-Photoshop-SD": [ + [ + "APS_LatentBatch", + "APS_Seed", + "ContentMaskLatent", + "ControlNetScript", + "ControlnetUnit", + "GaussianLatentImage", + "GetConfig", + "LoadImageBase64", + "LoadImageWithMetaData", + "LoadLorasFromPrompt", + "MaskExpansion" + ], + { + "title_aux": "Comfy-Photoshop-SD" + } + ], + "https://github.com/AbyssYuan0/ComfyUI_BadgerTools": [ + [ + "ApplyMaskToImage-badger", + "CropImageByMask-badger", + 
"ExpandImageWithColor-badger", + "FindThickLinesFromCanny-badger", + "FloatToInt-badger", + "FloatToString-badger", + "FrameToVideo-badger", + "GarbageCollect-badger", + "GetColorFromBorder-badger", + "GetDirName-badger", + "GetUUID-badger", + "IdentifyBorderColorToMask-badger", + "IdentifyColorToMask-badger", + "ImageNormalization-badger", + "ImageOverlap-badger", + "ImageScaleToSide-badger", + "IntToString-badger", + "SegmentToMaskByPoint-badger", + "StringToFizz-badger", + "TextListToString-badger", + "TrimTransparentEdges-badger", + "VideoCutFromDir-badger", + "VideoToFrame-badger", + "deleteDir-badger", + "findCenterOfMask-badger", + "getImageSide-badger", + "getParentDir-badger", + "mkdir-badger" + ], + { + "title_aux": "ComfyUI_BadgerTools" + } + ], + "https://github.com/Acly/comfyui-inpaint-nodes": [ + [ + "INPAINT_ApplyFooocusInpaint", + "INPAINT_InpaintWithModel", + "INPAINT_LoadFooocusInpaint", + "INPAINT_LoadInpaintModel", + "INPAINT_MaskedBlur", + "INPAINT_MaskedFill", + "INPAINT_VAEEncodeInpaintConditioning" + ], + { + "title_aux": "ComfyUI Inpaint Nodes" + } + ], + "https://github.com/Acly/comfyui-tooling-nodes": [ + [ + "ETN_ApplyMaskToImage", + "ETN_CropImage", + "ETN_LoadImageBase64", + "ETN_LoadMaskBase64", + "ETN_SendImageWebSocket" + ], + { + "title_aux": "ComfyUI Nodes for External Tooling" + } + ], + "https://github.com/Amorano/Jovimetrix": [ + [], + { + "author": "amorano", + "description": "Webcams, GLSL shader, Media Streaming, Tick animation, Image manipulation,", + "nodename_pattern": " \\(jov\\)$", + "title": "Jovimetrix", + "title_aux": "Jovimetrix Composition Nodes" + } + ], + "https://github.com/ArtBot2023/CharacterFaceSwap": [ + [ + "Color Blend", + "Crop Face", + "Exclude Facial Feature", + "Generation Parameter Input", + "Generation Parameter Output", + "Image Full BBox", + "Load BiseNet", + "Load RetinaFace", + "Mask Contour", + "Segment Face", + "Uncrop Face" + ], + { + "title_aux": "Character Face Swap" + } + ], + "https://github.com/ArtVentureX/comfyui-animatediff": [ + [ + "AnimateDiffCombine", + "AnimateDiffLoraLoader", + "AnimateDiffModuleLoader", + "AnimateDiffSampler", + "AnimateDiffSlidingWindowOptions", + "ImageSizeAndBatchSize", + "LoadVideo" + ], + { + "title_aux": "AnimateDiff" + } + ], + "https://github.com/AustinMroz/ComfyUI-SpliceTools": [ + [ + "LogSigmas", + "RerangeSigmas", + "SpliceDenoised", + "SpliceLatents", + "TemporalSplice" + ], + { + "title_aux": "SpliceTools" + } + ], + "https://github.com/BadCafeCode/masquerade-nodes-comfyui": [ + [ + "Blur", + "Change Channel Count", + "Combine Masks", + "Constant Mask", + "Convert Color Space", + "Create QR Code", + "Create Rect Mask", + "Cut By Mask", + "Get Image Size", + "Image To Mask", + "Make Image Batch", + "Mask By Text", + "Mask Morphology", + "Mask To Region", + "MasqueradeIncrementer", + "Mix Color By Mask", + "Mix Images By Mask", + "Paste By Mask", + "Prune By Mask", + "Separate Mask Components", + "Unary Image Op", + "Unary Mask Op" + ], + { + "title_aux": "Masquerade Nodes" + } + ], + "https://github.com/Beinsezii/bsz-cui-extras": [ + [ + "BSZAbsoluteHires", + "BSZAspectHires", + "BSZColoredLatentImageXL", + "BSZCombinedHires", + "BSZHueChromaXL", + "BSZInjectionKSampler", + "BSZLatentDebug", + "BSZLatentFill", + "BSZLatentGradient", + "BSZLatentHSVAImage", + "BSZLatentOffsetXL", + "BSZLatentRGBAImage", + "BSZLatentbuster", + "BSZPixelbuster", + "BSZPixelbusterHelp", + "BSZPrincipledConditioning", + "BSZPrincipledSampler", + "BSZPrincipledScale", + "BSZStrangeResample" + ], + 
{ + "title_aux": "bsz-cui-extras" + } + ], + "https://github.com/BennyKok/comfyui-deploy": [ + [ + "ComfyUIDeployExternalCheckpoint", + "ComfyUIDeployExternalImage", + "ComfyUIDeployExternalImageAlpha", + "ComfyUIDeployExternalLora", + "ComfyUIDeployExternalNumber", + "ComfyUIDeployExternalNumberInt", + "ComfyUIDeployExternalText" + ], + { + "author": "BennyKok", + "description": "", + "nickname": "Comfy Deploy", + "title": "comfyui-deploy", + "title_aux": "ComfyUI Deploy" + } + ], + "https://github.com/Bikecicle/ComfyUI-Waveform-Extensions/raw/main/EXT_AudioManipulation.py": [ + [ + "BatchJoinAudio", + "CutAudio", + "DuplicateAudio", + "JoinAudio", + "ResampleAudio", + "ReverseAudio", + "StretchAudio" + ], + { + "title_aux": "Waveform Extensions" + } + ], + "https://github.com/Billius-AI/ComfyUI-Path-Helper": [ + [ + "Add File Name Prefix", + "Add File Name Prefix Advanced", + "Add Folder", + "Add Folder Advanced", + "Create Project Root", + "Join Variables", + "Show Path", + "Show String" + ], + { + "title_aux": "ComfyUI-Path-Helper" + } + ], + "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb": [ + [ + "BNK_AddCLIPSDXLParams", + "BNK_AddCLIPSDXLRParams", + "BNK_CLIPTextEncodeAdvanced", + "BNK_CLIPTextEncodeSDXLAdvanced" + ], + { + "title_aux": "Advanced CLIP Text Encode" + } + ], + "https://github.com/BlenderNeko/ComfyUI_Cutoff": [ + [ + "BNK_CutoffBasePrompt", + "BNK_CutoffRegionsToConditioning", + "BNK_CutoffRegionsToConditioning_ADV", + "BNK_CutoffSetRegions" + ], + { + "title_aux": "ComfyUI Cutoff" + } + ], + "https://github.com/BlenderNeko/ComfyUI_Noise": [ + [ + "BNK_DuplicateBatchIndex", + "BNK_GetSigma", + "BNK_InjectNoise", + "BNK_NoisyLatentImage", + "BNK_SlerpLatent", + "BNK_Unsampler" + ], + { + "title_aux": "ComfyUI Noise" + } + ], + "https://github.com/BlenderNeko/ComfyUI_SeeCoder": [ + [ + "ConcatConditioning", + "SEECoderImageEncode" + ], + { + "title_aux": "SeeCoder [WIP]" + } + ], + "https://github.com/BlenderNeko/ComfyUI_TiledKSampler": [ + [ + "BNK_TiledKSampler", + "BNK_TiledKSamplerAdvanced" + ], + { + "title_aux": "Tiled sampling for ComfyUI" + } + ], + "https://github.com/CYBERLOOM-INC/ComfyUI-nodes-hnmr": [ + [ + "CLIPIter", + "Dict2Model", + "GridImage", + "ImageBlend2", + "KSamplerOverrided", + "KSamplerSetting", + "KSamplerXYZ", + "LatentToHist", + "LatentToImage", + "ModelIter", + "RandomLatentImage", + "SaveStateDict", + "SaveText", + "StateDictLoader", + "StateDictMerger", + "StateDictMergerBlockWeighted", + "StateDictMergerBlockWeightedMulti", + "VAEDecodeBatched", + "VAEEncodeBatched", + "VAEIter" + ], + { + "title_aux": "ComfyUI-nodes-hnmr" + } + ], + "https://github.com/CaptainGrock/ComfyUIInvisibleWatermark/raw/main/Invisible%20Watermark.py": [ + [ + "Apply Invisible Watermark", + "Extract Watermark" + ], + { + "title_aux": "ComfyUIInvisibleWatermark" + } + ], + "https://github.com/Chan-0312/ComfyUI-IPAnimate": [ + [ + "IPAdapterAnimate" + ], + { + "title_aux": "ComfyUI-IPAnimate" + } + ], + "https://github.com/Chaoses-Ib/ComfyUI_Ib_CustomNodes": [ + [ + "ImageToPIL", + "LoadImageFromPath", + "PILToImage", + "PILToMask" + ], + { + "title_aux": "ComfyUI_Ib_CustomNodes" + } + ], + "https://github.com/Clybius/ComfyUI-Extra-Samplers": [ + [ + "SamplerCLYB_4M_SDE_Momentumized", + "SamplerCustomModelMixtureDuo", + "SamplerCustomNoise", + "SamplerCustomNoiseDuo", + "SamplerDPMPP_DualSDE_Momentumized", + "SamplerEulerAncestralDancing_Experimental", + "SamplerLCMCustom", + "SamplerRES_Momentumized", + "SamplerTTM" + ], + { + "title_aux": "ComfyUI Extra 
Samplers" + } + ], + "https://github.com/Clybius/ComfyUI-Latent-Modifiers": [ + [ + "Latent Diffusion Mega Modifier" + ], + { + "title_aux": "ComfyUI-Latent-Modifiers" + } + ], + "https://github.com/CosmicLaca/ComfyUI_Primere_Nodes": [ + [ + "PrimereAnyDetailer", + "PrimereAnyOutput", + "PrimereCKPT", + "PrimereCKPTLoader", + "PrimereCLIPEncoder", + "PrimereClearPrompt", + "PrimereDynamicParser", + "PrimereEmbedding", + "PrimereEmbeddingHandler", + "PrimereEmbeddingKeywordMerger", + "PrimereEmotionsStyles", + "PrimereHypernetwork", + "PrimereImageSegments", + "PrimereKSampler", + "PrimereLCMSelector", + "PrimereLORA", + "PrimereLYCORIS", + "PrimereLatentNoise", + "PrimereLoraKeywordMerger", + "PrimereLoraStackMerger", + "PrimereLycorisKeywordMerger", + "PrimereLycorisStackMerger", + "PrimereMetaCollector", + "PrimereMetaRead", + "PrimereMetaSave", + "PrimereMidjourneyStyles", + "PrimereModelConceptSelector", + "PrimereModelKeyword", + "PrimereNetworkTagLoader", + "PrimerePrompt", + "PrimerePromptSwitch", + "PrimereRefinerPrompt", + "PrimereResolution", + "PrimereResolutionMultiplier", + "PrimereResolutionMultiplierMPX", + "PrimereSamplers", + "PrimereSamplersSteps", + "PrimereSeed", + "PrimereStepsCfg", + "PrimereStyleLoader", + "PrimereStylePile", + "PrimereTextOutput", + "PrimereVAE", + "PrimereVAELoader", + "PrimereVAESelector", + "PrimereVisualCKPT", + "PrimereVisualEmbedding", + "PrimereVisualHypernetwork", + "PrimereVisualLORA", + "PrimereVisualLYCORIS", + "PrimereVisualStyle" + ], + { + "title_aux": "Primere nodes for ComfyUI" + } + ], + "https://github.com/Danand/ComfyUI-ComfyCouple": [ + [ + "Attention couple", + "Comfy Couple" + ], + { + "author": "Rei D.", + "description": "If you want to draw two different characters together without blending their features, so you could try to check out this custom node.", + "nickname": "Danand", + "title": "Comfy Couple", + "title_aux": "ComfyUI-ComfyCouple" + } + ], + "https://github.com/Davemane42/ComfyUI_Dave_CustomNode": [ + [ + "ABGRemover", + "ConditioningStretch", + "ConditioningUpscale", + "MultiAreaConditioning", + "MultiLatentComposite" + ], + { + "title_aux": "Visual Area Conditioning / Latent composition" + } + ], + "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes": [ + [ + "ABSNode_DF", + "Absolute value", + "Ceil", + "CeilNode_DF", + "Conditioning area scale by ratio", + "ConditioningSetArea with tuples", + "ConditioningSetAreaEXT_DF", + "ConditioningSetArea_DF", + "CosNode_DF", + "Cosines", + "Divide", + "DivideNode_DF", + "EmptyLatentImage_DF", + "Float", + "Float debug print", + "Float2Tuple_DF", + "FloatDebugPrint_DF", + "FloatNode_DF", + "Floor", + "FloorNode_DF", + "Get image size", + "Get latent size", + "GetImageSize_DF", + "GetLatentSize_DF", + "Image scale by ratio", + "Image scale to side", + "ImageScale_Ratio_DF", + "ImageScale_Side_DF", + "Int debug print", + "Int to float", + "Int to tuple", + "Int2Float_DF", + "IntDebugPrint_DF", + "Integer", + "IntegerNode_DF", + "Latent Scale by ratio", + "Latent Scale to side", + "LatentComposite with tuples", + "LatentScale_Ratio_DF", + "LatentScale_Side_DF", + "MultilineStringNode_DF", + "Multiply", + "MultiplyNode_DF", + "PowNode_DF", + "Power", + "Random", + "RandomFloat_DF", + "SinNode_DF", + "Sinus", + "SqrtNode_DF", + "Square root", + "String debug print", + "StringNode_DF", + "Subtract", + "SubtractNode_DF", + "Sum", + "SumNode_DF", + "TanNode_DF", + "Tangent", + "Text", + "Text box", + "Tuple", + "Tuple debug print", + "Tuple multiply", + "Tuple swap", + "Tuple to 
floats", + "Tuple to ints", + "Tuple2Float_DF", + "TupleDebugPrint_DF", + "TupleNode_DF" + ], + { + "title_aux": "Derfuu_ComfyUI_ModdedNodes" + } + ], + "https://github.com/DonBaronFactory/ComfyUI-Cre8it-Nodes": [ + [ + "ApplySerialPrompter", + "ImageSizer", + "SerialPrompter" + ], + { + "author": "CRE8IT GmbH", + "description": "This extension offers various nodes.", + "nickname": "cre8Nodes", + "title": "cr8SerialPrompter", + "title_aux": "ComfyUI-Cre8it-Nodes" + } + ], + "https://github.com/Electrofried/ComfyUI-OpenAINode": [ + [ + "OpenAINode" + ], + { + "title_aux": "OpenAINode" + } + ], + "https://github.com/EllangoK/ComfyUI-post-processing-nodes": [ + [ + "ArithmeticBlend", + "AsciiArt", + "Blend", + "Blur", + "CannyEdgeMask", + "ChromaticAberration", + "ColorCorrect", + "ColorTint", + "Dissolve", + "Dither", + "DodgeAndBurn", + "FilmGrain", + "Glow", + "HSVThresholdMask", + "KMeansQuantize", + "KuwaharaBlur", + "Parabolize", + "PencilSketch", + "PixelSort", + "Pixelize", + "Quantize", + "Sharpen", + "SineWave", + "Solarize", + "Vignette" + ], + { + "title_aux": "ComfyUI-post-processing-nodes" + } + ], + "https://github.com/Extraltodeus/ComfyUI-AutomaticCFG": [ + [ + "Automatic CFG", + "Automatic CFG channels multipliers" + ], + { + "title_aux": "ComfyUI-AutomaticCFG" + } + ], + "https://github.com/Extraltodeus/LoadLoraWithTags": [ + [ + "LoraLoaderTagsQuery" + ], + { + "title_aux": "LoadLoraWithTags" + } + ], + "https://github.com/Extraltodeus/noise_latent_perlinpinpin": [ + [ + "NoisyLatentPerlin" + ], + { + "title_aux": "noise latent perlinpinpin" + } + ], + "https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler": [ + [ + "Get sigmas as float", + "Graph sigmas", + "Manual scheduler", + "Merge sigmas by average", + "Merge sigmas gradually", + "Multiply sigmas", + "Split and concatenate sigmas", + "The Golden Scheduler" + ], + { + "title_aux": "sigmas_tools_and_the_golden_scheduler" + } + ], + "https://github.com/Fannovel16/ComfyUI-Frame-Interpolation": [ + [ + "AMT VFI", + "CAIN VFI", + "EISAI VFI", + "FILM VFI", + "FLAVR VFI", + "GMFSS Fortuna VFI", + "IFRNet VFI", + "IFUnet VFI", + "KSampler Gradually Adding More Denoise (efficient)", + "M2M VFI", + "Make Interpolation State List", + "RIFE VFI", + "STMFNet VFI", + "Sepconv VFI" + ], + { + "title_aux": "ComfyUI Frame Interpolation" + } + ], + "https://github.com/Fannovel16/ComfyUI-Loopchain": [ + [ + "EmptyLatentImageLoop", + "FolderToImageStorage", + "ImageStorageExportLoop", + "ImageStorageImport", + "ImageStorageReset", + "LatentStorageExportLoop", + "LatentStorageImport", + "LatentStorageReset" + ], + { + "title_aux": "ComfyUI Loopchain" + } + ], + "https://github.com/Fannovel16/ComfyUI-MotionDiff": [ + [ + "EmptyMotionData", + "ExportSMPLTo3DSoftware", + "MotionCLIPTextEncode", + "MotionDataVisualizer", + "MotionDiffLoader", + "MotionDiffSimpleSampler", + "RenderSMPLMesh", + "SMPLLoader", + "SaveSMPL", + "SmplifyMotionData" + ], + { + "title_aux": "ComfyUI MotionDiff" + } + ], + "https://github.com/Fannovel16/ComfyUI-Video-Matting": [ + [ + "BRIAAI Matting", + "Robust Video Matting" + ], + { + "title_aux": "ComfyUI-Video-Matting" + } + ], + "https://github.com/Fannovel16/comfyui_controlnet_aux": [ + [ + "AIO_Preprocessor", + "AnimalPosePreprocessor", + "AnimeFace_SemSegPreprocessor", + "AnimeLineArtPreprocessor", + "BAE-NormalMapPreprocessor", + "BinaryPreprocessor", + "CannyEdgePreprocessor", + "ColorPreprocessor", + "DWPreprocessor", + "DensePosePreprocessor", + "DepthAnythingPreprocessor", + 
"DiffusionEdge_Preprocessor", + "FacialPartColoringFromPoseKps", + "FakeScribblePreprocessor", + "HEDPreprocessor", + "HintImageEnchance", + "ImageGenResolutionFromImage", + "ImageGenResolutionFromLatent", + "ImageIntensityDetector", + "ImageLuminanceDetector", + "InpaintPreprocessor", + "LeReS-DepthMapPreprocessor", + "LineArtPreprocessor", + "LineartStandardPreprocessor", + "M-LSDPreprocessor", + "Manga2Anime_LineArt_Preprocessor", + "MaskOptFlow", + "MediaPipe-FaceMeshPreprocessor", + "MeshGraphormer-DepthMapPreprocessor", + "MiDaS-DepthMapPreprocessor", + "MiDaS-NormalMapPreprocessor", + "OneFormer-ADE20K-SemSegPreprocessor", + "OneFormer-COCO-SemSegPreprocessor", + "OpenposePreprocessor", + "PiDiNetPreprocessor", + "PixelPerfectResolution", + "SAMPreprocessor", + "SavePoseKpsAsJsonFile", + "ScribblePreprocessor", + "Scribble_XDoG_Preprocessor", + "SemSegPreprocessor", + "ShufflePreprocessor", + "TEEDPreprocessor", + "TilePreprocessor", + "UniFormer-SemSegPreprocessor", + "Unimatch_OptFlowPreprocessor", + "Zoe-DepthMapPreprocessor", + "Zoe_DepthAnythingPreprocessor" + ], + { + "author": "tstandley", + "title_aux": "ComfyUI's ControlNet Auxiliary Preprocessors" + } + ], + "https://github.com/Feidorian/feidorian-ComfyNodes": [ + [], + { + "nodename_pattern": "^Feidorian_", + "title_aux": "feidorian-ComfyNodes" + } + ], + "https://github.com/Fictiverse/ComfyUI_Fictiverse": [ + [ + "Add Noise to Image with Mask", + "Color correction", + "Displace Image with Depth", + "Displace Images with Mask", + "Zoom Image with Depth" + ], + { + "title_aux": "ComfyUI Fictiverse Nodes" + } + ], + "https://github.com/FizzleDorf/ComfyUI-AIT": [ + [ + "AIT_Unet_Loader", + "AIT_VAE_Encode_Loader" + ], + { + "title_aux": "ComfyUI-AIT" + } + ], + "https://github.com/FizzleDorf/ComfyUI_FizzNodes": [ + [ + "AbsCosWave", + "AbsSinWave", + "BatchGLIGENSchedule", + "BatchPromptSchedule", + "BatchPromptScheduleEncodeSDXL", + "BatchPromptScheduleLatentInput", + "BatchPromptScheduleNodeFlowEnd", + "BatchPromptScheduleSDXLLatentInput", + "BatchStringSchedule", + "BatchValueSchedule", + "BatchValueScheduleLatentInput", + "CalculateFrameOffset", + "ConcatStringSingle", + "CosWave", + "FizzFrame", + "FizzFrameConcatenate", + "ImageBatchFromValueSchedule", + "Init FizzFrame", + "InvCosWave", + "InvSinWave", + "Lerp", + "PromptSchedule", + "PromptScheduleEncodeSDXL", + "PromptScheduleNodeFlow", + "PromptScheduleNodeFlowEnd", + "SawtoothWave", + "SinWave", + "SquareWave", + "StringConcatenate", + "StringSchedule", + "TriangleWave", + "ValueSchedule", + "convertKeyframeKeysToBatchKeys" + ], + { + "title_aux": "FizzNodes" + } + ], + "https://github.com/FlyingFireCo/tiled_ksampler": [ + [ + "Asymmetric Tiled KSampler", + "Circular VAEDecode", + "Tiled KSampler" + ], + { + "title_aux": "tiled_ksampler" + } + ], + "https://github.com/Franck-Demongin/NX_PromptStyler": [ + [ + "NX_PromptStyler" + ], + { + "title_aux": "NX_PromptStyler" + } + ], + "https://github.com/GMapeSplat/ComfyUI_ezXY": [ + [ + "ConcatenateString", + "ItemFromDropdown", + "IterationDriver", + "JoinImages", + "LineToConsole", + "NumberFromList", + "NumbersToList", + "PlotImages", + "StringFromList", + "StringToLabel", + "StringsToList", + "ezMath", + "ezXY_AssemblePlot", + "ezXY_Driver" + ], + { + "title_aux": "ezXY scripts and nodes" + } + ], + "https://github.com/GTSuya-Studio/ComfyUI-Gtsuya-Nodes": [ + [ + "Danbooru (ID)", + "Danbooru (Random)", + "Random File From Path", + "Replace Strings", + "Simple Wildcards", + "Simple Wildcards (Dir.)", + "Wildcards 
Nodes" + ], + { + "title_aux": "ComfyUI-GTSuya-Nodes" + } + ], + "https://github.com/GavChap/ComfyUI-CascadeResolutions": [ + [ + "CascadeResolutions" + ], + { + "title_aux": "ComfyUI-CascadeResolutions" + } + ], + "https://github.com/Gourieff/comfyui-reactor-node": [ + [ + "ReActorFaceSwap", + "ReActorLoadFaceModel", + "ReActorRestoreFace", + "ReActorSaveFaceModel" + ], + { + "title_aux": "ReActor Node for ComfyUI" + } + ], + "https://github.com/HAL41/ComfyUI-aichemy-nodes": [ + [ + "aichemyYOLOv8Segmentation" + ], + { + "title_aux": "ComfyUI aichemy nodes" + } + ], + "https://github.com/Hangover3832/ComfyUI-Hangover-Moondream": [ + [ + "Moondream Interrogator (NO COMMERCIAL USE)" + ], + { + "title_aux": "ComfyUI-Hangover-Moondream" + } + ], + "https://github.com/Hangover3832/ComfyUI-Hangover-Nodes": [ + [ + "Image Scale Bounding Box", + "MS kosmos-2 Interrogator", + "Make Inpaint Model", + "Save Image w/o Metadata" + ], + { + "title_aux": "ComfyUI-Hangover-Nodes" + } + ], + "https://github.com/Haoming02/comfyui-diffusion-cg": [ + [ + "Normalization", + "NormalizationXL", + "Recenter", + "Recenter XL" + ], + { + "title_aux": "ComfyUI Diffusion Color Grading" + } + ], + "https://github.com/Haoming02/comfyui-floodgate": [ + [ + "FloodGate" + ], + { + "title_aux": "ComfyUI Floodgate" + } + ], + "https://github.com/HaydenReeve/ComfyUI-Better-Strings": [ + [ + "BetterString" + ], + { + "title_aux": "ComfyUI Better Strings" + } + ], + "https://github.com/HebelHuber/comfyui-enhanced-save-node": [ + [ + "EnhancedSaveNode" + ], + { + "title_aux": "comfyui-enhanced-save-node" + } + ], + "https://github.com/Hiero207/ComfyUI-Hiero-Nodes": [ + [ + "Post to Discord w/ Webhook" + ], + { + "author": "Hiero", + "description": "Just some nodes that I wanted/needed, so I made them.", + "nickname": "HNodes", + "title": "Hiero-Nodes", + "title_aux": "ComfyUI-Hiero-Nodes" + } + ], + "https://github.com/IDGallagher/ComfyUI-IG-Nodes": [ + [ + "IG Analyze SSIM", + "IG Cross Fade Images", + "IG Explorer", + "IG Float", + "IG Folder", + "IG Int", + "IG Load Image", + "IG Load Images", + "IG Multiply", + "IG Path Join", + "IG String", + "IG ZFill" + ], + { + "author": "IDGallagher", + "description": "Custom nodes to aid in the exploration of Latent Space", + "nickname": "IG Interpolation Nodes", + "title": "IG Interpolation Nodes", + "title_aux": "IG Interpolation Nodes" + } + ], + "https://github.com/Inzaniak/comfyui-ranbooru": [ + [ + "PromptBackground", + "PromptLimit", + "PromptMix", + "PromptRandomWeight", + "PromptRemove", + "Ranbooru", + "RanbooruURL", + "RandomPicturePath" + ], + { + "title_aux": "Ranbooru for ComfyUI" + } + ], + "https://github.com/JPS-GER/ComfyUI_JPS-Nodes": [ + [ + "Conditioning Switch (JPS)", + "ControlNet Switch (JPS)", + "Crop Image Pipe (JPS)", + "Crop Image Settings (JPS)", + "Crop Image Square (JPS)", + "Crop Image TargetSize (JPS)", + "CtrlNet CannyEdge Pipe (JPS)", + "CtrlNet CannyEdge Settings (JPS)", + "CtrlNet MiDaS Pipe (JPS)", + "CtrlNet MiDaS Settings (JPS)", + "CtrlNet OpenPose Pipe (JPS)", + "CtrlNet OpenPose Settings (JPS)", + "CtrlNet ZoeDepth Pipe (JPS)", + "CtrlNet ZoeDepth Settings (JPS)", + "Disable Enable Switch (JPS)", + "Enable Disable Switch (JPS)", + "Generation TXT IMG Settings (JPS)", + "Get Date Time String (JPS)", + "Get Image Size (JPS)", + "IP Adapter Settings (JPS)", + "IP Adapter Settings Pipe (JPS)", + "IP Adapter Single Settings (JPS)", + "IP Adapter Single Settings Pipe (JPS)", + "IPA Switch (JPS)", + "Image Switch (JPS)", + "ImageToImage Pipe (JPS)", 
+ "ImageToImage Settings (JPS)", + "Images Masks MultiPipe (JPS)", + "Integer Switch (JPS)", + "Largest Int (JPS)", + "Latent Switch (JPS)", + "Lora Loader (JPS)", + "Mask Switch (JPS)", + "Model Switch (JPS)", + "Multiply Float Float (JPS)", + "Multiply Int Float (JPS)", + "Multiply Int Int (JPS)", + "Resolution Multiply (JPS)", + "Revision Settings (JPS)", + "Revision Settings Pipe (JPS)", + "SDXL Basic Settings (JPS)", + "SDXL Basic Settings Pipe (JPS)", + "SDXL Fundamentals MultiPipe (JPS)", + "SDXL Prompt Handling (JPS)", + "SDXL Prompt Handling Plus (JPS)", + "SDXL Prompt Styler (JPS)", + "SDXL Recommended Resolution Calc (JPS)", + "SDXL Resolutions (JPS)", + "Sampler Scheduler Settings (JPS)", + "Save Images Plus (JPS)", + "Substract Int Int (JPS)", + "Text Concatenate (JPS)", + "Text Prompt (JPS)", + "VAE Switch (JPS)" + ], + { + "author": "JPS", + "description": "Various nodes to handle SDXL Resolutions, SDXL Basic Settings, IP Adapter Settings, Revision Settings, SDXL Prompt Styler, Crop Image to Square, Crop Image to Target Size, Get Date-Time String, Resolution Multiply, Largest Integer, 5-to-1 Switches for Integer, Images, Latents, Conditioning, Model, VAE, ControlNet", + "nickname": "JPS Custom Nodes", + "title": "JPS Custom Nodes for ComfyUI", + "title_aux": "JPS Custom Nodes for ComfyUI" + } + ], + "https://github.com/JaredTherriault/ComfyUI-JNodes": [ + [ + "JNodes_AddOrSetMetaDataKey", + "JNodes_AnyToString", + "JNodes_AppendReversedFrames", + "JNodes_BooleanSelectorWithString", + "JNodes_CheckpointSelectorWithString", + "JNodes_GetOutputDirectory", + "JNodes_GetParameterFromList", + "JNodes_GetParameterGlobal", + "JNodes_GetTempDirectory", + "JNodes_ImageFormatSelector", + "JNodes_ImageSizeSelector", + "JNodes_LoadVideo", + "JNodes_LoraExtractor", + "JNodes_OutVideoInfo", + "JNodes_ParseDynamicPrompts", + "JNodes_ParseParametersToGlobalList", + "JNodes_ParseWildcards", + "JNodes_PromptBuilderSingleSubject", + "JNodes_RemoveCommentedText", + "JNodes_RemoveMetaDataKey", + "JNodes_RemoveParseableDataForInference", + "JNodes_SamplerSelectorWithString", + "JNodes_SaveImageWithOutput", + "JNodes_SaveVideo", + "JNodes_SchedulerSelectorWithString", + "JNodes_SearchAndReplace", + "JNodes_SearchAndReplaceFromFile", + "JNodes_SearchAndReplaceFromList", + "JNodes_SetNegativePromptInMetaData", + "JNodes_SetPositivePromptInMetaData", + "JNodes_SplitAndJoin", + "JNodes_StringLiteral", + "JNodes_SyncedStringLiteral", + "JNodes_TokenCounter", + "JNodes_TrimAndStrip", + "JNodes_UploadVideo", + "JNodes_VaeSelectorWithString" + ], + { + "title_aux": "ComfyUI-JNodes" + } + ], + "https://github.com/JcandZero/ComfyUI_GLM4Node": [ + [ + "GLM3_turbo_CHAT", + "GLM4_CHAT", + "GLM4_Vsion_IMGURL" + ], + { + "title_aux": "ComfyUI_GLM4Node" + } + ], + "https://github.com/Jcd1230/rembg-comfyui-node": [ + [ + "Image Remove Background (rembg)" + ], + { + "title_aux": "Rembg Background Removal Node for ComfyUI" + } + ], + "https://github.com/JerryOrbachJr/ComfyUI-RandomSize": [ + [ + "JOJR_RandomSize" + ], + { + "author": "JerryOrbachJr", + "description": "A ComfyUI custom node that randomly selects a height and width pair from a list in a config file", + "nickname": "Random Size", + "title": "Random Size", + "title_aux": "ComfyUI-RandomSize" + } + ], + "https://github.com/Jordach/comfy-plasma": [ + [ + "JDC_AutoContrast", + "JDC_BlendImages", + "JDC_BrownNoise", + "JDC_Contrast", + "JDC_EqualizeGrey", + "JDC_GaussianBlur", + "JDC_GreyNoise", + "JDC_Greyscale", + "JDC_ImageLoader", + 
"JDC_ImageLoaderMeta", + "JDC_PinkNoise", + "JDC_Plasma", + "JDC_PlasmaSampler", + "JDC_PowerImage", + "JDC_RandNoise", + "JDC_ResizeFactor" + ], + { + "title_aux": "comfy-plasma" + } + ], + "https://github.com/Kaharos94/ComfyUI-Saveaswebp": [ + [ + "Save_as_webp" + ], + { + "title_aux": "ComfyUI-Saveaswebp" + } + ], + "https://github.com/Kangkang625/ComfyUI-paint-by-example": [ + [ + "PaintbyExamplePipeLoader", + "PaintbyExampleSampler" + ], + { + "title_aux": "ComfyUI-Paint-by-Example" + } + ], + "https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet": [ + [ + "ACN_AdvancedControlNetApply", + "ACN_ControlNetLoaderWithLoraAdvanced", + "ACN_DefaultUniversalWeights", + "ACN_SparseCtrlIndexMethodNode", + "ACN_SparseCtrlLoaderAdvanced", + "ACN_SparseCtrlMergedLoaderAdvanced", + "ACN_SparseCtrlRGBPreprocessor", + "ACN_SparseCtrlSpreadMethodNode", + "ControlNetLoaderAdvanced", + "CustomControlNetWeights", + "CustomT2IAdapterWeights", + "DiffControlNetLoaderAdvanced", + "LatentKeyframe", + "LatentKeyframeBatchedGroup", + "LatentKeyframeGroup", + "LatentKeyframeTiming", + "LoadImagesFromDirectory", + "ScaledSoftControlNetWeights", + "ScaledSoftMaskedUniversalWeights", + "SoftControlNetWeights", + "SoftT2IAdapterWeights", + "TimestepKeyframe" + ], + { + "title_aux": "ComfyUI-Advanced-ControlNet" + } + ], + "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved": [ + [ + "ADE_AdjustPEFullStretch", + "ADE_AdjustPEManual", + "ADE_AdjustPESweetspotStretch", + "ADE_AnimateDiffCombine", + "ADE_AnimateDiffKeyframe", + "ADE_AnimateDiffLoRALoader", + "ADE_AnimateDiffLoaderGen1", + "ADE_AnimateDiffLoaderV1Advanced", + "ADE_AnimateDiffLoaderWithContext", + "ADE_AnimateDiffModelSettings", + "ADE_AnimateDiffModelSettingsAdvancedAttnStrengths", + "ADE_AnimateDiffModelSettingsSimple", + "ADE_AnimateDiffModelSettings_Release", + "ADE_AnimateDiffSamplingSettings", + "ADE_AnimateDiffSettings", + "ADE_AnimateDiffUniformContextOptions", + "ADE_AnimateDiffUnload", + "ADE_ApplyAnimateDiffModel", + "ADE_ApplyAnimateDiffModelSimple", + "ADE_BatchedContextOptions", + "ADE_CustomCFG", + "ADE_CustomCFGKeyframe", + "ADE_EmptyLatentImageLarge", + "ADE_IterationOptsDefault", + "ADE_IterationOptsFreeInit", + "ADE_LoadAnimateDiffModel", + "ADE_LoopedUniformContextOptions", + "ADE_LoopedUniformViewOptions", + "ADE_MaskedLoadLora", + "ADE_MultivalDynamic", + "ADE_MultivalScaledMask", + "ADE_NoiseLayerAdd", + "ADE_NoiseLayerAddWeighted", + "ADE_NoiseLayerReplace", + "ADE_RawSigmaSchedule", + "ADE_SigmaSchedule", + "ADE_SigmaScheduleSplitAndCombine", + "ADE_SigmaScheduleWeightedAverage", + "ADE_SigmaScheduleWeightedAverageInterp", + "ADE_StandardStaticContextOptions", + "ADE_StandardStaticViewOptions", + "ADE_StandardUniformContextOptions", + "ADE_StandardUniformViewOptions", + "ADE_UseEvolvedSampling", + "ADE_ViewsOnlyContextOptions", + "AnimateDiffLoaderV1", + "CheckpointLoaderSimpleWithNoiseSelect" + ], + { + "title_aux": "AnimateDiff Evolved" + } + ], + "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite": [ + [ + "VHS_BatchManager", + "VHS_DuplicateImages", + "VHS_DuplicateLatents", + "VHS_DuplicateMasks", + "VHS_GetImageCount", + "VHS_GetLatentCount", + "VHS_GetMaskCount", + "VHS_LoadAudio", + "VHS_LoadImages", + "VHS_LoadImagesPath", + "VHS_LoadVideo", + "VHS_LoadVideoPath", + "VHS_MergeImages", + "VHS_MergeLatents", + "VHS_MergeMasks", + "VHS_PruneOutputs", + "VHS_SelectEveryNthImage", + "VHS_SelectEveryNthLatent", + "VHS_SelectEveryNthMask", + "VHS_SplitImages", + "VHS_SplitLatents", + "VHS_SplitMasks", + 
"VHS_VAEDecodeBatched", + "VHS_VAEEncodeBatched", + "VHS_VideoCombine" + ], + { + "title_aux": "ComfyUI-VideoHelperSuite" + } + ], + "https://github.com/LEv145/images-grid-comfy-plugin": [ + [ + "GridAnnotation", + "ImageCombine", + "ImagesGridByColumns", + "ImagesGridByRows", + "LatentCombine" + ], + { + "title_aux": "ImagesGrid" + } + ], + "https://github.com/LarryJane491/Image-Captioning-in-ComfyUI": [ + [ + "LoRA Caption Load", + "LoRA Caption Save" + ], + { + "title_aux": "Image-Captioning-in-ComfyUI" + } + ], + "https://github.com/LarryJane491/Lora-Training-in-Comfy": [ + [ + "Lora Training in Comfy (Advanced)", + "Lora Training in ComfyUI", + "Tensorboard Access" + ], + { + "title_aux": "Lora-Training-in-Comfy" + } + ], + "https://github.com/Layer-norm/comfyui-lama-remover": [ + [ + "LamaRemover", + "LamaRemoverIMG" + ], + { + "title_aux": "Comfyui lama remover" + } + ], + "https://github.com/Lerc/canvas_tab": [ + [ + "Canvas_Tab", + "Send_To_Editor" + ], + { + "author": "Lerc", + "description": "This extension provides a full page image editor with mask support. There are two nodes, one to receive images from the editor and one to send images to the editor.", + "nickname": "Canvas Tab", + "title": "Canvas Tab", + "title_aux": "Canvas Tab" + } + ], + "https://github.com/Limitex/ComfyUI-Calculation": [ + [ + "CenterCalculation", + "CreateQRCode" + ], + { + "title_aux": "ComfyUI-Calculation" + } + ], + "https://github.com/Limitex/ComfyUI-Diffusers": [ + [ + "CreateIntListNode", + "DiffusersClipTextEncode", + "DiffusersModelMakeup", + "DiffusersPipelineLoader", + "DiffusersSampler", + "DiffusersSchedulerLoader", + "DiffusersVaeLoader", + "LcmLoraLoader", + "StreamDiffusionCreateStream", + "StreamDiffusionFastSampler", + "StreamDiffusionSampler", + "StreamDiffusionWarmup" + ], + { + "title_aux": "ComfyUI-Diffusers" + } + ], + "https://github.com/Loewen-Hob/rembg-comfyui-node-better": [ + [ + "Image Remove Background (rembg)" + ], + { + "title_aux": "Rembg Background Removal Node for ComfyUI" + } + ], + "https://github.com/LonicaMewinsky/ComfyUI-MakeFrame": [ + [ + "BreakFrames", + "BreakGrid", + "GetKeyFrames", + "MakeGrid", + "RandomImageFromDir" + ], + { + "title_aux": "ComfyBreakAnim" + } + ], + "https://github.com/LonicaMewinsky/ComfyUI-RawSaver": [ + [ + "SaveTifImage" + ], + { + "title_aux": "ComfyUI-RawSaver" + } + ], + "https://github.com/LyazS/comfyui-anime-seg": [ + [ + "Anime Character Seg" + ], + { + "title_aux": "Anime Character Segmentation node for comfyui" + } + ], + "https://github.com/M1kep/ComfyLiterals": [ + [ + "Checkpoint", + "Float", + "Int", + "KepStringLiteral", + "Lora", + "Operation", + "String" + ], + { + "title_aux": "ComfyLiterals" + } + ], + "https://github.com/M1kep/ComfyUI-KepOpenAI": [ + [ + "KepOpenAI_ImageWithPrompt" + ], + { + "title_aux": "ComfyUI-KepOpenAI" + } + ], + "https://github.com/M1kep/ComfyUI-OtherVAEs": [ + [ + "OtherVAE_Taesd" + ], + { + "title_aux": "ComfyUI-OtherVAEs" + } + ], + "https://github.com/M1kep/Comfy_KepKitchenSink": [ + [ + "KepRotateImage" + ], + { + "title_aux": "Comfy_KepKitchenSink" + } + ], + "https://github.com/M1kep/Comfy_KepListStuff": [ + [ + "Empty Images", + "Image Overlay", + "ImageListLoader", + "Join Float Lists", + "Join Image Lists", + "KepStringList", + "KepStringListFromNewline", + "Kep_JoinListAny", + "Kep_RepeatList", + "Kep_ReverseList", + "Kep_VariableImageBuilder", + "List Length", + "Range(Num Steps) - Float", + "Range(Num Steps) - Int", + "Range(Step) - Float", + "Range(Step) - Int", + "Stack 
Images", + "XYAny", + "XYImage" + ], + { + "title_aux": "Comfy_KepListStuff" + } + ], + "https://github.com/M1kep/Comfy_KepMatteAnything": [ + [ + "MatteAnything_DinoBoxes", + "MatteAnything_GenerateVITMatte", + "MatteAnything_InitSamPredictor", + "MatteAnything_LoadDINO", + "MatteAnything_LoadVITMatteModel", + "MatteAnything_SAMLoader", + "MatteAnything_SAMMaskFromBoxes", + "MatteAnything_ToTrimap" + ], + { + "title_aux": "Comfy_KepMatteAnything" + } + ], + "https://github.com/M1kep/KepPromptLang": [ + [ + "Build Gif", + "Special CLIP Loader" + ], + { + "title_aux": "KepPromptLang" + } + ], + "https://github.com/MNeMoNiCuZ/ComfyUI-mnemic-nodes": [ + [ + "Save Text File_mne" + ], + { + "title_aux": "ComfyUI-mnemic-nodes" + } + ], + "https://github.com/Mamaaaamooooo/batchImg-rembg-ComfyUI-nodes": [ + [ + "Image Remove Background (rembg)" + ], + { + "title_aux": "Batch Rembg for ComfyUI" + } + ], + "https://github.com/ManglerFTW/ComfyI2I": [ + [ + "Color Transfer", + "Combine and Paste", + "Inpaint Segments", + "Mask Ops" + ], + { + "author": "ManglerFTW", + "title": "ComfyI2I", + "title_aux": "ComfyI2I" + } + ], + "https://github.com/MarkoCa1/ComfyUI_Segment_Mask": [ + [ + "AutomaticMask(segment anything)" + ], + { + "title_aux": "ComfyUI_Segment_Mask" + } + ], + "https://github.com/Miosp/ComfyUI-FBCNN": [ + [ + "JPEG artifacts removal FBCNN" + ], + { + "title_aux": "ComfyUI-FBCNN" + } + ], + "https://github.com/MitoshiroPJ/comfyui_slothful_attention": [ + [ + "NearSightedAttention", + "NearSightedAttentionSimple", + "NearSightedTile", + "SlothfulAttention" + ], + { + "title_aux": "ComfyUI Slothful Attention" + } + ], + "https://github.com/MrForExample/ComfyUI-3D-Pack": [ + [], + { + "nodename_pattern": "^\\[Comfy3D\\]", + "title_aux": "ComfyUI-3D-Pack" + } + ], + "https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved": [ + [], + { + "nodename_pattern": "^\\[AnimateAnyone\\]", + "title_aux": "ComfyUI-AnimateAnyone-Evolved" + } + ], + "https://github.com/NicholasMcCarthy/ComfyUI_TravelSuite": [ + [ + "LatentTravel" + ], + { + "title_aux": "ComfyUI_TravelSuite" + } + ], + "https://github.com/NimaNzrii/comfyui-photoshop": [ + [ + "PhotoshopToComfyUI" + ], + { + "title_aux": "comfyui-photoshop" + } + ], + "https://github.com/NimaNzrii/comfyui-popup_preview": [ + [ + "PreviewPopup" + ], + { + "title_aux": "comfyui-popup_preview" + } + ], + "https://github.com/Niutonian/ComfyUi-NoodleWebcam": [ + [ + "WebcamNode" + ], + { + "title_aux": "ComfyUi-NoodleWebcam" + } + ], + "https://github.com/Nlar/ComfyUI_CartoonSegmentation": [ + [ + "AnimeSegmentation", + "KenBurnsConfigLoader", + "KenBurns_Processor", + "LoadImageFilename" + ], + { + "author": "Nels Larsen", + "description": "This extension offers a front end to the Cartoon Segmentation Project (https://github.com/CartoonSegmentation/CartoonSegmentation)", + "nickname": "CfyCS", + "title": "ComfyUI_CartoonSegmentation", + "title_aux": "ComfyUI_CartoonSegmentation" + } + ], + "https://github.com/NotHarroweD/Harronode": [ + [ + "Harronode" + ], + { + "author": "HarroweD and quadmoon (https://github.com/traugdor)", + "description": "This extension to ComfyUI will build a prompt for the Harrlogos LoRA for SDXL.", + "nickname": "Harronode", + "nodename_pattern": "Harronode", + "title": "Harrlogos Prompt Builder Node", + "title_aux": "Harronode" + } + ], + "https://github.com/Nourepide/ComfyUI-Allor": [ + [ + "AlphaChanelAdd", + "AlphaChanelAddByMask", + "AlphaChanelAsMask", + "AlphaChanelRemove", + "AlphaChanelRestore", + "ClipClamp", + 
"ClipVisionClamp", + "ClipVisionOutputClamp", + "ConditioningClamp", + "ControlNetClamp", + "GligenClamp", + "ImageBatchCopy", + "ImageBatchFork", + "ImageBatchGet", + "ImageBatchJoin", + "ImageBatchPermute", + "ImageBatchRemove", + "ImageClamp", + "ImageCompositeAbsolute", + "ImageCompositeAbsoluteByContainer", + "ImageCompositeRelative", + "ImageCompositeRelativeByContainer", + "ImageContainer", + "ImageContainerInheritanceAdd", + "ImageContainerInheritanceMax", + "ImageContainerInheritanceScale", + "ImageContainerInheritanceSum", + "ImageDrawArc", + "ImageDrawArcByContainer", + "ImageDrawChord", + "ImageDrawChordByContainer", + "ImageDrawEllipse", + "ImageDrawEllipseByContainer", + "ImageDrawLine", + "ImageDrawLineByContainer", + "ImageDrawPieslice", + "ImageDrawPiesliceByContainer", + "ImageDrawPolygon", + "ImageDrawRectangle", + "ImageDrawRectangleByContainer", + "ImageDrawRectangleRounded", + "ImageDrawRectangleRoundedByContainer", + "ImageEffectsAdjustment", + "ImageEffectsGrayscale", + "ImageEffectsLensBokeh", + "ImageEffectsLensChromaticAberration", + "ImageEffectsLensOpticAxis", + "ImageEffectsLensVignette", + "ImageEffectsLensZoomBurst", + "ImageEffectsNegative", + "ImageEffectsSepia", + "ImageFilterBilateralBlur", + "ImageFilterBlur", + "ImageFilterBoxBlur", + "ImageFilterContour", + "ImageFilterDetail", + "ImageFilterEdgeEnhance", + "ImageFilterEdgeEnhanceMore", + "ImageFilterEmboss", + "ImageFilterFindEdges", + "ImageFilterGaussianBlur", + "ImageFilterGaussianBlurAdvanced", + "ImageFilterMax", + "ImageFilterMedianBlur", + "ImageFilterMin", + "ImageFilterMode", + "ImageFilterRank", + "ImageFilterSharpen", + "ImageFilterSmooth", + "ImageFilterSmoothMore", + "ImageFilterStackBlur", + "ImageNoiseBeta", + "ImageNoiseBinomial", + "ImageNoiseBytes", + "ImageNoiseGaussian", + "ImageSegmentation", + "ImageSegmentationCustom", + "ImageSegmentationCustomAdvanced", + "ImageText", + "ImageTextMultiline", + "ImageTextMultilineOutlined", + "ImageTextOutlined", + "ImageTransformCropAbsolute", + "ImageTransformCropCorners", + "ImageTransformCropRelative", + "ImageTransformPaddingAbsolute", + "ImageTransformPaddingRelative", + "ImageTransformResizeAbsolute", + "ImageTransformResizeClip", + "ImageTransformResizeRelative", + "ImageTransformRotate", + "ImageTransformTranspose", + "LatentClamp", + "MaskClamp", + "ModelClamp", + "StyleModelClamp", + "UpscaleModelClamp", + "VaeClamp" + ], + { + "title_aux": "Allor Plugin" + } + ], + "https://github.com/Nuked88/ComfyUI-N-Nodes": [ + [ + "CLIPTextEncodeAdvancedNSuite [n-suite]", + "DynamicPrompt [n-suite]", + "Float Variable [n-suite]", + "FrameInterpolator [n-suite]", + "GPT Loader Simple [n-suite]", + "GPT Sampler [n-suite]", + "ImagePadForOutpaintAdvanced [n-suite]", + "Integer Variable [n-suite]", + "Llava Clip Loader [n-suite]", + "LoadFramesFromFolder [n-suite]", + "LoadVideo [n-suite]", + "SaveVideo [n-suite]", + "SetMetadataForSaveVideo [n-suite]", + "String Variable [n-suite]" + ], + { + "title_aux": "ComfyUI-N-Nodes" + } + ], + "https://github.com/Off-Live/ComfyUI-off-suite": [ + [ + "Apply CLAHE", + "Cached Image Load From URL", + "Crop Center wigh SEGS", + "Crop Center with SEGS", + "Dilate Mask for Each Face", + "GW Number Formatting", + "Image Crop Fit", + "Image Resize Fit", + "OFF SEGS to Image", + "Paste Face Segment to Image", + "Query Gender and Age", + "SEGS to Face Crop Data", + "Safe Mask to Image", + "VAE Encode For Inpaint V2", + "Watermarking" + ], + { + "title_aux": "ComfyUI-off-suite" + } + ], + 
"https://github.com/Onierous/QRNG_Node_ComfyUI/raw/main/qrng_node.py": [ + [ + "QRNG_Node_CSV" + ], + { + "title_aux": "QRNG_Node_ComfyUI" + } + ], + "https://github.com/PCMonsterx/ComfyUI-CSV-Loader": [ + [ + "Load Artists CSV", + "Load Artmovements CSV", + "Load Characters CSV", + "Load Colors CSV", + "Load Composition CSV", + "Load Lighting CSV", + "Load Negative CSV", + "Load Positive CSV", + "Load Settings CSV", + "Load Styles CSV" + ], + { + "title_aux": "ComfyUI-CSV-Loader" + } + ], + "https://github.com/ParmanBabra/ComfyUI-Malefish-Custom-Scripts": [ + [ + "CSVPromptsLoader", + "CombinePrompt", + "MultiLoraLoader", + "RandomPrompt" + ], + { + "title_aux": "ComfyUI-Malefish-Custom-Scripts" + } + ], + "https://github.com/Pfaeff/pfaeff-comfyui": [ + [ + "AstropulsePixelDetector", + "BackgroundRemover", + "ImagePadForBetterOutpaint", + "Inpainting", + "InpaintingPipelineLoader" + ], + { + "title_aux": "pfaeff-comfyui" + } + ], + "https://github.com/QaisMalkawi/ComfyUI-QaisHelper": [ + [ + "Bool Binary Operation", + "Bool Unary Operation", + "Item Debugger", + "Item Switch", + "Nearest SDXL Resolution", + "SDXL Resolution", + "Size Swapper" + ], + { + "title_aux": "ComfyUI-Qais-Helper" + } + ], + "https://github.com/RenderRift/ComfyUI-RenderRiftNodes": [ + [ + "AnalyseMetadata", + "DateIntegerNode", + "DisplayMetaOptions", + "LoadImageWithMeta", + "MetadataOverlayNode", + "VideoPathMetaExtraction" + ], + { + "title_aux": "ComfyUI-RenderRiftNodes" + } + ], + "https://github.com/Ryuukeisyou/comfyui_face_parsing": [ + [ + "BBoxListItemSelect(FaceParsing)", + "BBoxResize(FaceParsing)", + "ColorAdjust(FaceParsing)", + "FaceBBoxDetect(FaceParsing)", + "FaceBBoxDetectorLoader(FaceParsing)", + "FaceParse(FaceParsing)", + "FaceParsingModelLoader(FaceParsing)", + "FaceParsingProcessorLoader(FaceParsing)", + "FaceParsingResultsParser(FaceParsing)", + "GuidedFilter(FaceParsing)", + "ImageCropWithBBox(FaceParsing)", + "ImageInsertWithBBox(FaceParsing)", + "ImageListSelect(FaceParsing)", + "ImagePadWithBBox(FaceParsing)", + "ImageResizeCalculator(FaceParsing)", + "ImageResizeWithBBox(FaceParsing)", + "ImageSize(FaceParsing)", + "LatentCropWithBBox(FaceParsing)", + "LatentInsertWithBBox(FaceParsing)", + "LatentSize(FaceParsing)", + "MaskComposite(FaceParsing)", + "MaskListComposite(FaceParsing)", + "MaskListSelect(FaceParsing)", + "MaskToBBox(FaceParsing)", + "SkinDetectTraditional(FaceParsing)" + ], + { + "title_aux": "comfyui_face_parsing" + } + ], + "https://github.com/Ryuukeisyou/comfyui_image_io_helpers": [ + [ + "ImageLoadAsMaskByPath(ImageIOHelpers)", + "ImageLoadByPath(ImageIOHelpers)", + "ImageLoadFromBase64(ImageIOHelpers)", + "ImageSaveAsBase64(ImageIOHelpers)", + "ImageSaveToPath(ImageIOHelpers)" + ], + { + "title_aux": "comfyui_image_io_helpers" + } + ], + "https://github.com/SLAPaper/ComfyUI-Image-Selector": [ + [ + "ImageDuplicator", + "ImageSelector", + "LatentDuplicator", + "LatentSelector" + ], + { + "title_aux": "ComfyUI-Image-Selector" + } + ], + "https://github.com/SOELexicon/ComfyUI-LexMSDBNodes": [ + [ + "MSSqlSelectNode", + "MSSqlTableNode" + ], + { + "title_aux": "LexMSDBNodes" + } + ], + "https://github.com/SOELexicon/ComfyUI-LexTools": [ + [ + "AgeClassifierNode", + "ArtOrHumanClassifierNode", + "DocumentClassificationNode", + "FoodCategoryClassifierNode", + "ImageAspectPadNode", + "ImageCaptioning", + "ImageFilterByFloatScoreNode", + "ImageFilterByIntScoreNode", + "ImageQualityScoreNode", + "ImageRankingNode", + "ImageScaleToMin", + "MD5ImageHashNode", + 
"SamplerPropertiesNode", + "ScoreConverterNode", + "SeedIncrementerNode", + "SegformerNode", + "SegformerNodeMasks", + "SegformerNodeMergeSegments", + "StepCfgIncrementNode" + ], + { + "title_aux": "ComfyUI-LexTools" + } + ], + "https://github.com/SadaleNet/CLIPTextEncodeA1111-ComfyUI/raw/master/custom_nodes/clip_text_encoder_a1111.py": [ + [ + "CLIPTextEncodeA1111", + "RerouteTextForCLIPTextEncodeA1111" + ], + { + "title_aux": "ComfyUI A1111-like Prompt Custom Node Solution" + } + ], + "https://github.com/Scholar01/ComfyUI-Keyframe": [ + [ + "KeyframeApply", + "KeyframeInterpolationPart", + "KeyframePart" + ], + { + "title_aux": "SComfyUI-Keyframe" + } + ], + "https://github.com/SeargeDP/SeargeSDXL": [ + [ + "SeargeAdvancedParameters", + "SeargeCheckpointLoader", + "SeargeConditionMixing", + "SeargeConditioningMuxer2", + "SeargeConditioningMuxer5", + "SeargeConditioningParameters", + "SeargeControlnetAdapterV2", + "SeargeControlnetModels", + "SeargeCustomAfterUpscaling", + "SeargeCustomAfterVaeDecode", + "SeargeCustomPromptMode", + "SeargeDebugPrinter", + "SeargeEnablerInputs", + "SeargeFloatConstant", + "SeargeFloatMath", + "SeargeFloatPair", + "SeargeFreeU", + "SeargeGenerated1", + "SeargeGenerationParameters", + "SeargeHighResolution", + "SeargeImage2ImageAndInpainting", + "SeargeImageAdapterV2", + "SeargeImageSave", + "SeargeImageSaving", + "SeargeInput1", + "SeargeInput2", + "SeargeInput3", + "SeargeInput4", + "SeargeInput5", + "SeargeInput6", + "SeargeInput7", + "SeargeIntegerConstant", + "SeargeIntegerMath", + "SeargeIntegerPair", + "SeargeIntegerScaler", + "SeargeLatentMuxer3", + "SeargeLoraLoader", + "SeargeLoras", + "SeargeMagicBox", + "SeargeModelSelector", + "SeargeOperatingMode", + "SeargeOutput1", + "SeargeOutput2", + "SeargeOutput3", + "SeargeOutput4", + "SeargeOutput5", + "SeargeOutput6", + "SeargeOutput7", + "SeargeParameterProcessor", + "SeargePipelineStart", + "SeargePipelineTerminator", + "SeargePreviewImage", + "SeargePromptAdapterV2", + "SeargePromptCombiner", + "SeargePromptStyles", + "SeargePromptText", + "SeargeSDXLBasePromptEncoder", + "SeargeSDXLImage2ImageSampler", + "SeargeSDXLImage2ImageSampler2", + "SeargeSDXLPromptEncoder", + "SeargeSDXLRefinerPromptEncoder", + "SeargeSDXLSampler", + "SeargeSDXLSampler2", + "SeargeSDXLSamplerV3", + "SeargeSamplerAdvanced", + "SeargeSamplerInputs", + "SeargeSaveFolderInputs", + "SeargeSeparator", + "SeargeStylePreprocessor", + "SeargeTextInputV2", + "SeargeUpscaleModelLoader", + "SeargeUpscaleModels", + "SeargeVAELoader" + ], + { + "title_aux": "SeargeSDXL" + } + ], + "https://github.com/Ser-Hilary/SDXL_sizing/raw/main/conditioning_sizing_for_SDXL.py": [ + [ + "get_aspect_from_image", + "get_aspect_from_ints", + "sizing_node", + "sizing_node_basic", + "sizing_node_unparsed" + ], + { + "title_aux": "SDXL_sizing" + } + ], + "https://github.com/ShmuelRonen/ComfyUI-SVDResizer": [ + [ + "SVDRsizer" + ], + { + "title_aux": "ComfyUI-SVDResizer" + } + ], + "https://github.com/Shraknard/ComfyUI-Remover": [ + [ + "Remover" + ], + { + "title_aux": "ComfyUI-Remover" + } + ], + "https://github.com/Siberpone/lazy-pony-prompter": [ + [ + "LPP_Deleter", + "LPP_Derpibooru", + "LPP_E621", + "LPP_Loader_Derpibooru", + "LPP_Loader_E621", + "LPP_Saver" + ], + { + "title_aux": "Lazy Pony Prompter" + } + ], + "https://github.com/Smuzzies/comfyui_chatbox_overlay/raw/main/chatbox_overlay.py": [ + [ + "Chatbox Overlay" + ], + { + "title_aux": "Chatbox Overlay node for ComfyUI" + } + ], + "https://github.com/SoftMeng/ComfyUI_Mexx_Poster": [ + [ + 
"ComfyUI_Mexx_Poster" + ], + { + "title_aux": "ComfyUI_Mexx_Poster" + } + ], + "https://github.com/SoftMeng/ComfyUI_Mexx_Styler": [ + [ + "MexxSDXLPromptStyler", + "MexxSDXLPromptStylerAdvanced" + ], + { + "title_aux": "ComfyUI_Mexx_Styler" + } + ], + "https://github.com/SpaceKendo/ComfyUI-svd_txt2vid": [ + [ + "SVD_txt2vid_ConditioningwithLatent" + ], + { + "title_aux": "Text to video for Stable Video Diffusion in ComfyUI" + } + ], + "https://github.com/Stability-AI/stability-ComfyUI-nodes": [ + [ + "ColorBlend", + "ControlLoraSave", + "GetImageSize" + ], + { + "title_aux": "stability-ComfyUI-nodes" + } + ], + "https://github.com/StartHua/ComfyUI_Seg_VITON": [ + [ + "segformer_agnostic", + "segformer_clothes", + "segformer_remove_bg", + "stabel_vition" + ], + { + "title_aux": "ComfyUI_Seg_VITON" + } + ], + "https://github.com/StartHua/Comfyui_joytag": [ + [ + "CXH_JoyTag" + ], + { + "title_aux": "Comfyui_joytag" + } + ], + "https://github.com/StartHua/Comfyui_segformer_b2_clothes": [ + [ + "segformer_b2_clothes" + ], + { + "title_aux": "comfyui_segformer_b2_clothes" + } + ], + "https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes": [ + [ + "CR 8 Channel In", + "CR 8 Channel Out", + "CR Apply ControlNet", + "CR Apply LoRA Stack", + "CR Apply Model Merge", + "CR Apply Multi Upscale", + "CR Apply Multi-ControlNet", + "CR Arabic Text RTL", + "CR Aspect Ratio", + "CR Aspect Ratio Banners", + "CR Aspect Ratio SDXL", + "CR Aspect Ratio Social Media", + "CR Batch Images From List", + "CR Batch Process Switch", + "CR Binary Pattern", + "CR Binary To Bit List", + "CR Bit Schedule", + "CR Central Schedule", + "CR Checker Pattern", + "CR Clamp Value", + "CR Clip Input Switch", + "CR Color Bars", + "CR Color Gradient", + "CR Color Panel", + "CR Color Tint", + "CR Combine Prompt", + "CR Combine Schedules", + "CR Comic Panel Templates", + "CR Composite Text", + "CR Conditioning Input Switch", + "CR Conditioning Mixer", + "CR ControlNet Input Switch", + "CR Current Frame", + "CR Cycle Images", + "CR Cycle Images Simple", + "CR Cycle LoRAs", + "CR Cycle Models", + "CR Cycle Text", + "CR Cycle Text Simple", + "CR Data Bus In", + "CR Data Bus Out", + "CR Debatch Frames", + "CR Diamond Panel", + "CR Draw Perspective Text", + "CR Draw Pie", + "CR Draw Shape", + "CR Draw Text", + "CR Encode Scheduled Prompts", + "CR Feathered Border", + "CR Float Range List", + "CR Float To Integer", + "CR Float To String", + "CR Font File List", + "CR Get Parameter From Prompt", + "CR Gradient Float", + "CR Gradient Integer", + "CR Half Drop Panel", + "CR Halftone Filter", + "CR Halftone Grid", + "CR Hires Fix Process Switch", + "CR Image Border", + "CR Image Grid Panel", + "CR Image Input Switch", + "CR Image Input Switch (4 way)", + "CR Image List", + "CR Image List Simple", + "CR Image Output", + "CR Image Panel", + "CR Image Pipe Edit", + "CR Image Pipe In", + "CR Image Pipe Out", + "CR Image Size", + "CR Img2Img Process Switch", + "CR Increment Float", + "CR Increment Integer", + "CR Index", + "CR Index Increment", + "CR Index Multiply", + "CR Index Reset", + "CR Input Text List", + "CR Integer Multiple", + "CR Integer Range List", + "CR Integer To String", + "CR Interpolate Latents", + "CR Intertwine Lists", + "CR Keyframe List", + "CR Latent Batch Size", + "CR Latent Input Switch", + "CR LoRA List", + "CR LoRA Stack", + "CR Load Animation Frames", + "CR Load Flow Frames", + "CR Load GIF As List", + "CR Load Image List", + "CR Load Image List Plus", + "CR Load LoRA", + "CR Load Prompt Style", + "CR Load Schedule From 
File", + "CR Load Scheduled ControlNets", + "CR Load Scheduled LoRAs", + "CR Load Scheduled Models", + "CR Load Text List", + "CR Mask Text", + "CR Math Operation", + "CR Model Input Switch", + "CR Model List", + "CR Model Merge Stack", + "CR Module Input", + "CR Module Output", + "CR Module Pipe Loader", + "CR Multi Upscale Stack", + "CR Multi-ControlNet Stack", + "CR Multiline Text", + "CR Output Flow Frames", + "CR Output Schedule To File", + "CR Overlay Text", + "CR Overlay Transparent Image", + "CR Page Layout", + "CR Pipe Switch", + "CR Polygons", + "CR Prompt List", + "CR Prompt List Keyframes", + "CR Prompt Scheduler", + "CR Prompt Text", + "CR Radial Gradient", + "CR Random Hex Color", + "CR Random LoRA Stack", + "CR Random Multiline Colors", + "CR Random Multiline Values", + "CR Random Panel Codes", + "CR Random RGB", + "CR Random RGB Gradient", + "CR Random Shape Pattern", + "CR Random Weight LoRA", + "CR Repeater", + "CR SD1.5 Aspect Ratio", + "CR SDXL Aspect Ratio", + "CR SDXL Base Prompt Encoder", + "CR SDXL Prompt Mix Presets", + "CR SDXL Prompt Mixer", + "CR SDXL Style Text", + "CR Save Text To File", + "CR Schedule Input Switch", + "CR Schedule To ScheduleList", + "CR Seamless Checker", + "CR Seed", + "CR Seed to Int", + "CR Select Font", + "CR Select ISO Size", + "CR Select Model", + "CR Select Resize Method", + "CR Set Switch From String", + "CR Set Value On Binary", + "CR Set Value On Boolean", + "CR Set Value on String", + "CR Simple Banner", + "CR Simple Binary Pattern", + "CR Simple Binary Pattern Simple", + "CR Simple Image Compare", + "CR Simple List", + "CR Simple Meme Template", + "CR Simple Prompt List", + "CR Simple Prompt List Keyframes", + "CR Simple Prompt Scheduler", + "CR Simple Schedule", + "CR Simple Text Panel", + "CR Simple Text Scheduler", + "CR Simple Text Watermark", + "CR Simple Titles", + "CR Simple Value Scheduler", + "CR Split String", + "CR Starburst Colors", + "CR Starburst Lines", + "CR String To Boolean", + "CR String To Combo", + "CR String To Number", + "CR Style Bars", + "CR Switch Model and CLIP", + "CR Text", + "CR Text Blacklist", + "CR Text Concatenate", + "CR Text Cycler", + "CR Text Input Switch", + "CR Text Input Switch (4 way)", + "CR Text Length", + "CR Text List", + "CR Text List Simple", + "CR Text List To String", + "CR Text Operation", + "CR Text Replace", + "CR Text Scheduler", + "CR Thumbnail Preview", + "CR Trigger", + "CR Upscale Image", + "CR VAE Decode", + "CR VAE Input Switch", + "CR Value", + "CR Value Cycler", + "CR Value Scheduler", + "CR Vignette Filter", + "CR XY From Folder", + "CR XY Index", + "CR XY Interpolate", + "CR XY List", + "CR XY Product", + "CR XY Save Grid Image", + "CR XYZ Index", + "CR_Aspect Ratio For Print" + ], + { + "author": "Suzie1", + "description": "175 custom nodes for artists, designers and animators.", + "nickname": "Comfyroll Studio", + "title": "Comfyroll Studio", + "title_aux": "ComfyUI_Comfyroll_CustomNodes" + } + ], + "https://github.com/Sxela/ComfyWarp": [ + [ + "ExtractOpticalFlow", + "LoadFrame", + "LoadFrameFromDataset", + "LoadFrameFromFolder", + "LoadFramePairFromDataset", + "LoadFrameSequence", + "MakeFrameDataset", + "MixConsistencyMaps", + "OffsetNumber", + "ResizeToFit", + "SaveFrame", + "WarpFrame" + ], + { + "title_aux": "ComfyWarp" + } + ], + "https://github.com/TGu-97/ComfyUI-TGu-utils": [ + [ + "MPNReroute", + "MPNSwitch", + "PNSwitch" + ], + { + "title_aux": "TGu Utilities" + } + ], + "https://github.com/THtianhao/ComfyUI-FaceChain": [ + [ + "FC CropAndPaste", + "FC 
CropBottom", + "FC CropToOrigin", + "FC FaceDetectCrop", + "FC FaceFusion", + "FC FaceSegAndReplace", + "FC FaceSegment", + "FC MaskOP", + "FC RemoveCannyFace", + "FC ReplaceByMask", + "FC StyleLoraLoad" + ], + { + "title_aux": "ComfyUI-FaceChain" + } + ], + "https://github.com/THtianhao/ComfyUI-Portrait-Maker": [ + [ + "PM_BoxCropImage", + "PM_ColorTransfer", + "PM_ExpandMaskBox", + "PM_FaceFusion", + "PM_FaceShapMatch", + "PM_FaceSkin", + "PM_GetImageInfo", + "PM_ImageResizeTarget", + "PM_ImageScaleShort", + "PM_MakeUpTransfer", + "PM_MaskDilateErode", + "PM_MaskMerge2Image", + "PM_PortraitEnhancement", + "PM_RatioMerge2Image", + "PM_ReplaceBoxImg", + "PM_RetinaFace", + "PM_Similarity", + "PM_SkinRetouching", + "PM_SuperColorTransfer", + "PM_SuperMakeUpTransfer" + ], + { + "title_aux": "ComfyUI-Portrait-Maker" + } + ], + "https://github.com/TRI3D-LC/tri3d-comfyui-nodes": [ + [ + "tri3d-adjust-neck", + "tri3d-atr-parse", + "tri3d-atr-parse-batch", + "tri3d-clipdrop-bgremove-api", + "tri3d-dwpose", + "tri3d-extract-hand", + "tri3d-extract-parts-batch", + "tri3d-extract-parts-batch2", + "tri3d-extract-parts-mask-batch", + "tri3d-face-recognise", + "tri3d-float-to-image", + "tri3d-fuzzification", + "tri3d-image-mask-2-box", + "tri3d-image-mask-box-2-image", + "tri3d-interaction-canny", + "tri3d-load-pose-json", + "tri3d-pose-adaption", + "tri3d-pose-to-image", + "tri3d-position-hands", + "tri3d-position-parts-batch", + "tri3d-recolor-mask", + "tri3d-recolor-mask-LAB_space", + "tri3d-recolor-mask-LAB_space_manual", + "tri3d-recolor-mask-RGB_space", + "tri3d-skin-feathered-padded-mask", + "tri3d-swap-pixels" + ], + { + "title_aux": "tri3d-comfyui-nodes" + } + ], + "https://github.com/Taremin/comfyui-prompt-extranetworks": [ + [ + "PromptExtraNetworks" + ], + { + "title_aux": "ComfyUI Prompt ExtraNetworks" + } + ], + "https://github.com/Taremin/comfyui-string-tools": [ + [ + "StringToolsBalancedChoice", + "StringToolsConcat", + "StringToolsRandomChoice", + "StringToolsString", + "StringToolsText" + ], + { + "title_aux": "ComfyUI String Tools" + } + ], + "https://github.com/TeaCrab/ComfyUI-TeaNodes": [ + [ + "TC_ColorFill", + "TC_EqualizeCLAHE", + "TC_ImageResize", + "TC_ImageScale", + "TC_RandomColorFill", + "TC_SizeApproximation" + ], + { + "title_aux": "ComfyUI-TeaNodes" + } + ], + "https://github.com/TemryL/ComfyS3": [ + [ + "DownloadFileS3", + "LoadImageS3", + "SaveImageS3", + "SaveVideoFilesS3", + "UploadFileS3" + ], + { + "title_aux": "ComfyS3" + } + ], + "https://github.com/TheBarret/ZSuite": [ + [ + "ZSuite: Prompter", + "ZSuite: RF Noise", + "ZSuite: SeedMod" + ], + { + "title_aux": "ZSuite" + } + ], + "https://github.com/TinyTerra/ComfyUI_tinyterraNodes": [ + [ + "ttN busIN", + "ttN busOUT", + "ttN compareInput", + "ttN concat", + "ttN debugInput", + "ttN float", + "ttN hiresfixScale", + "ttN imageOutput", + "ttN imageREMBG", + "ttN int", + "ttN multiModelMerge", + "ttN pipe2BASIC", + "ttN pipe2DETAILER", + "ttN pipeEDIT", + "ttN pipeEncodeConcat", + "ttN pipeIN", + "ttN pipeKSampler", + "ttN pipeKSamplerAdvanced", + "ttN pipeKSamplerSDXL", + "ttN pipeLoader", + "ttN pipeLoaderSDXL", + "ttN pipeLoraStack", + "ttN pipeOUT", + "ttN seed", + "ttN seedDebug", + "ttN text", + "ttN text3BOX_3WAYconcat", + "ttN text7BOX_concat", + "ttN textDebug", + "ttN xyPlot" + ], + { + "author": "tinyterra", + "description": "This extension offers various pipe nodes, fullscreen image viewer based on node history, dynamic widgets, interface customization, and more.", + "nickname": "ttNodes", + 
"nodename_pattern": "^ttN ", + "title": "tinyterraNodes", + "title_aux": "tinyterraNodes" + } + ], + "https://github.com/TripleHeadedMonkey/ComfyUI_MileHighStyler": [ + [ + "menus" + ], + { + "title_aux": "ComfyUI_MileHighStyler" + } + ], + "https://github.com/Tropfchen/ComfyUI-Embedding_Picker": [ + [ + "EmbeddingPicker" + ], + { + "title_aux": "Embedding Picker" + } + ], + "https://github.com/Tropfchen/ComfyUI-yaResolutionSelector": [ + [ + "YARS", + "YARSAdv" + ], + { + "title_aux": "YARS: Yet Another Resolution Selector" + } + ], + "https://github.com/Trung0246/ComfyUI-0246": [ + [ + "0246.Beautify", + "0246.BoxRange", + "0246.CastReroute", + "0246.Cloud", + "0246.Convert", + "0246.Count", + "0246.Highway", + "0246.HighwayBatch", + "0246.Hold", + "0246.Hub", + "0246.Junction", + "0246.JunctionBatch", + "0246.Loop", + "0246.Merge", + "0246.Meta", + "0246.Pick", + "0246.RandomInt", + "0246.Script", + "0246.ScriptNode", + "0246.ScriptPile", + "0246.ScriptRule", + "0246.Stringify", + "0246.Switch" + ], + { + "author": "Trung0246", + "description": "Random nodes for ComfyUI I made to solve my struggle with ComfyUI (ex: pipe, process). Have varying quality.", + "nickname": "ComfyUI-0246", + "title": "ComfyUI-0246", + "title_aux": "ComfyUI-0246" + } + ], + "https://github.com/Ttl/ComfyUi_NNLatentUpscale": [ + [ + "NNLatentUpscale" + ], + { + "title_aux": "ComfyUI Neural network latent upscale custom node" + } + ], + "https://github.com/Umikaze-job/select_folder_path_easy": [ + [ + "SelectFolderPathEasy" + ], + { + "title_aux": "select_folder_path_easy" + } + ], + "https://github.com/WASasquatch/ASTERR": [ + [ + "ASTERR", + "SaveASTERR" + ], + { + "title_aux": "ASTERR" + } + ], + "https://github.com/WASasquatch/ComfyUI_Preset_Merger": [ + [ + "Preset_Model_Merge" + ], + { + "title_aux": "ComfyUI Preset Merger" + } + ], + "https://github.com/WASasquatch/FreeU_Advanced": [ + [ + "FreeU (Advanced)", + "FreeU_V2 (Advanced)" + ], + { + "title_aux": "FreeU_Advanced" + } + ], + "https://github.com/WASasquatch/PPF_Noise_ComfyUI": [ + [ + "Blend Latents (PPF Noise)", + "Cross-Hatch Power Fractal (PPF Noise)", + "Images as Latents (PPF Noise)", + "Perlin Power Fractal Latent (PPF Noise)" + ], + { + "title_aux": "PPF_Noise_ComfyUI" + } + ], + "https://github.com/WASasquatch/PowerNoiseSuite": [ + [ + "Blend Latents (PPF Noise)", + "Cross-Hatch Power Fractal (PPF Noise)", + "Cross-Hatch Power Fractal Settings (PPF Noise)", + "Images as Latents (PPF Noise)", + "Latent Adjustment (PPF Noise)", + "Latents to CPU (PPF Noise)", + "Linear Cross-Hatch Power Fractal (PPF Noise)", + "Perlin Power Fractal Latent (PPF Noise)", + "Perlin Power Fractal Settings (PPF Noise)", + "Power KSampler Advanced (PPF Noise)", + "Power-Law Noise (PPF Noise)" + ], + { + "title_aux": "Power Noise Suite for ComfyUI" + } + ], + "https://github.com/WASasquatch/WAS_Extras": [ + [ + "BLVAEEncode", + "CLIPTextEncodeList", + "CLIPTextEncodeSequence2", + "ConditioningBlend", + "DebugInput", + "KSamplerSeq", + "KSamplerSeq2", + "VAEEncodeForInpaint (WAS)", + "VividSharpen" + ], + { + "title_aux": "WAS_Extras" + } + ], + "https://github.com/WASasquatch/was-node-suite-comfyui": [ + [ + "BLIP Analyze Image", + "BLIP Model Loader", + "Blend Latents", + "Boolean To Text", + "Bounded Image Blend", + "Bounded Image Blend with Mask", + "Bounded Image Crop", + "Bounded Image Crop with Mask", + "Bus Node", + "CLIP Input Switch", + "CLIP Vision Input Switch", + "CLIPSeg Batch Masking", + "CLIPSeg Masking", + "CLIPSeg Model Loader", + "CLIPTextEncode 
(BlenderNeko Advanced + NSP)", + "CLIPTextEncode (NSP)", + "Cache Node", + "Checkpoint Loader", + "Checkpoint Loader (Simple)", + "Conditioning Input Switch", + "Constant Number", + "Control Net Model Input Switch", + "Convert Masks to Images", + "Create Grid Image", + "Create Grid Image from Batch", + "Create Morph Image", + "Create Morph Image from Path", + "Create Video from Path", + "Debug Number to Console", + "Dictionary to Console", + "Diffusers Hub Model Down-Loader", + "Diffusers Model Loader", + "Export API", + "Image Analyze", + "Image Aspect Ratio", + "Image Batch", + "Image Blank", + "Image Blend", + "Image Blend by Mask", + "Image Blending Mode", + "Image Bloom Filter", + "Image Bounds", + "Image Bounds to Console", + "Image Canny Filter", + "Image Chromatic Aberration", + "Image Color Palette", + "Image Crop Face", + "Image Crop Location", + "Image Crop Square Location", + "Image Displacement Warp", + "Image Dragan Photography Filter", + "Image Edge Detection Filter", + "Image Film Grain", + "Image Filter Adjustments", + "Image Flip", + "Image Generate Gradient", + "Image Gradient Map", + "Image High Pass Filter", + "Image History Loader", + "Image Input Switch", + "Image Levels Adjustment", + "Image Load", + "Image Lucy Sharpen", + "Image Median Filter", + "Image Mix RGB Channels", + "Image Monitor Effects Filter", + "Image Nova Filter", + "Image Padding", + "Image Paste Crop", + "Image Paste Crop by Location", + "Image Paste Face", + "Image Perlin Noise", + "Image Perlin Power Fractal", + "Image Pixelate", + "Image Power Noise", + "Image Rembg (Remove Background)", + "Image Remove Background (Alpha)", + "Image Remove Color", + "Image Resize", + "Image Rotate", + "Image Rotate Hue", + "Image SSAO (Ambient Occlusion)", + "Image SSDO (Direct Occlusion)", + "Image Save", + "Image Seamless Texture", + "Image Select Channel", + "Image Select Color", + "Image Shadows and Highlights", + "Image Size to Number", + "Image Stitch", + "Image Style Filter", + "Image Threshold", + "Image Tiled", + "Image Transpose", + "Image Voronoi Noise Filter", + "Image fDOF Filter", + "Image to Latent Mask", + "Image to Noise", + "Image to Seed", + "Images to Linear", + "Images to RGB", + "Inset Image Bounds", + "Integer place counter", + "KSampler (WAS)", + "KSampler Cycle", + "Latent Batch", + "Latent Input Switch", + "Latent Noise Injection", + "Latent Size to Number", + "Latent Upscale by Factor (WAS)", + "Load Cache", + "Load Image Batch", + "Load Lora", + "Load Text File", + "Logic Boolean", + "Logic Boolean Primitive", + "Logic Comparison AND", + "Logic Comparison OR", + "Logic Comparison XOR", + "Logic NOT", + "Lora Input Switch", + "Lora Loader", + "Mask Arbitrary Region", + "Mask Batch", + "Mask Batch to Mask", + "Mask Ceiling Region", + "Mask Crop Dominant Region", + "Mask Crop Minority Region", + "Mask Crop Region", + "Mask Dilate Region", + "Mask Dominant Region", + "Mask Erode Region", + "Mask Fill Holes", + "Mask Floor Region", + "Mask Gaussian Region", + "Mask Invert", + "Mask Minority Region", + "Mask Paste Region", + "Mask Smooth Region", + "Mask Threshold Region", + "Masks Add", + "Masks Combine Batch", + "Masks Combine Regions", + "Masks Subtract", + "MiDaS Depth Approximation", + "MiDaS Mask Image", + "MiDaS Model Loader", + "Model Input Switch", + "Number Counter", + "Number Input Condition", + "Number Input Switch", + "Number Multiple Of", + "Number Operation", + "Number PI", + "Number to Float", + "Number to Int", + "Number to Seed", + "Number to String", + "Number to Text", + 
"Prompt Multiple Styles Selector", + "Prompt Styles Selector", + "Random Number", + "SAM Image Mask", + "SAM Model Loader", + "SAM Parameters", + "SAM Parameters Combine", + "Samples Passthrough (Stat System)", + "Save Text File", + "Seed", + "String to Text", + "Tensor Batch to Image", + "Text Add Token by Input", + "Text Add Tokens", + "Text Compare", + "Text Concatenate", + "Text Contains", + "Text Dictionary Convert", + "Text Dictionary Get", + "Text Dictionary Keys", + "Text Dictionary New", + "Text Dictionary To Text", + "Text Dictionary Update", + "Text File History Loader", + "Text Find and Replace", + "Text Find and Replace Input", + "Text Find and Replace by Dictionary", + "Text Input Switch", + "Text List", + "Text List Concatenate", + "Text List to Text", + "Text Load Line From File", + "Text Multiline", + "Text Parse A1111 Embeddings", + "Text Parse Noodle Soup Prompts", + "Text Parse Tokens", + "Text Random Line", + "Text Random Prompt", + "Text Shuffle", + "Text String", + "Text String Truncate", + "Text to Conditioning", + "Text to Console", + "Text to Number", + "Text to String", + "True Random.org Number Generator", + "Upscale Model Loader", + "Upscale Model Switch", + "VAE Input Switch", + "Video Dump Frames", + "Write to GIF", + "Write to Video", + "unCLIP Checkpoint Loader" + ], + { + "title_aux": "WAS Node Suite" + } + ], + "https://github.com/WebDev9000/WebDev9000-Nodes": [ + [ + "IgnoreBraces", + "SettingsSwitch" + ], + { + "title_aux": "WebDev9000-Nodes" + } + ], + "https://github.com/YMC-GitHub/ymc-node-suite-comfyui": [ + [ + "canvas-util-cal-size", + "conditioning-util-input-switch", + "cutoff-region-util", + "hks-util-cal-denoise-step", + "img-util-get-image-size", + "img-util-switch-input-image", + "io-image-save", + "io-text-save", + "io-util-file-list-get", + "io-util-file-list-get-text", + "number-util-random-num", + "pipe-util-to-basic-pipe", + "region-util-get-by-center-and-size", + "region-util-get-by-lt", + "region-util-get-crop-location-from-center-size-text", + "region-util-get-pad-out-location-by-size", + "text-preset-colors", + "text-util-join-text", + "text-util-loop-text", + "text-util-path-list", + "text-util-prompt-add-prompt", + "text-util-prompt-adv-dup", + "text-util-prompt-adv-search", + "text-util-prompt-del", + "text-util-prompt-dup", + "text-util-prompt-join", + "text-util-prompt-search", + "text-util-prompt-shuffle", + "text-util-prompt-std", + "text-util-prompt-unweight", + "text-util-random-text", + "text-util-search-text", + "text-util-show-text", + "text-util-switch-text", + "xyz-util-txt-to-int" + ], + { + "title_aux": "ymc-node-suite-comfyui" + } + ], + "https://github.com/YOUR-WORST-TACO/ComfyUI-TacoNodes": [ + [ + "Example", + "TacoAnimatedLoader", + "TacoGifMaker", + "TacoImg2ImgAnimatedLoader", + "TacoImg2ImgAnimatedProcessor", + "TacoLatent" + ], + { + "title_aux": "ComfyUI-TacoNodes" + } + ], + "https://github.com/YinBailiang/MergeBlockWeighted_fo_ComfyUI": [ + [ + "MergeBlockWeighted" + ], + { + "title_aux": "MergeBlockWeighted_fo_ComfyUI" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-ArtGallery": [ + [ + "ArtGallery_Zho", + "ArtistsImage_Zho", + "CamerasImage_Zho", + "FilmsImage_Zho", + "MovementsImage_Zho", + "StylesImage_Zho" + ], + { + "title_aux": "ComfyUI-ArtGallery" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini": [ + [ + "ConcatText_Zho", + "DisplayText_Zho", + "Gemini_API_Chat_Zho", + "Gemini_API_S_Chat_Zho", + "Gemini_API_S_Vsion_ImgURL_Zho", + "Gemini_API_S_Zho", + "Gemini_API_Vsion_ImgURL_Zho", + 
"Gemini_API_Zho" + ], + { + "title_aux": "ComfyUI-Gemini" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID": [ + [ + "IDBaseModelLoader_fromhub", + "IDBaseModelLoader_local", + "IDControlNetLoader", + "IDGenerationNode", + "ID_Prompt_Styler", + "InsightFaceLoader_Zho", + "Ipadapter_instantidLoader" + ], + { + "title_aux": "ComfyUI-InstantID" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-PhotoMaker-ZHO": [ + [ + "BaseModel_Loader_fromhub", + "BaseModel_Loader_local", + "LoRALoader", + "NEW_PhotoMaker_Generation", + "PhotoMakerAdapter_Loader_fromhub", + "PhotoMakerAdapter_Loader_local", + "PhotoMaker_Generation", + "Prompt_Styler", + "Ref_Image_Preprocessing" + ], + { + "title_aux": "ComfyUI PhotoMaker (ZHO)" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Q-Align": [ + [ + "QAlign_Zho" + ], + { + "title_aux": "ComfyUI-Q-Align" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Qwen-VL-API": [ + [ + "QWenVL_API_S_Multi_Zho", + "QWenVL_API_S_Zho" + ], + { + "title_aux": "ComfyUI-Qwen-VL-API" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SVD-ZHO": [ + [ + "SVD_Aspect_Ratio_Zho", + "SVD_Steps_MotionStrength_Seed_Zho", + "SVD_Styler_Zho" + ], + { + "title_aux": "ComfyUI-SVD-ZHO (WIP)" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SegMoE": [ + [ + "SMoE_Generation_Zho", + "SMoE_ModelLoader_Zho" + ], + { + "title_aux": "ComfyUI SegMoE" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite": [ + [ + "AlphaChanelAddByMask", + "ImageCompositeBy_BG_Zho", + "ImageCompositeBy_Zho", + "ImageComposite_BG_Zho", + "ImageComposite_Zho", + "RGB_Image_Zho", + "Text_Image_Frame_Zho", + "Text_Image_Multiline_Zho", + "Text_Image_Zho" + ], + { + "title_aux": "ComfyUI-Text_Image-Composite [WIP]" + } + ], + "https://github.com/ZHO-ZHO-ZHO/comfyui-portrait-master-zh-cn": [ + [ + "PortraitMaster_\u4e2d\u6587\u7248" + ], + { + "title_aux": "comfyui-portrait-master-zh-cn" + } + ], + "https://github.com/ZaneA/ComfyUI-ImageReward": [ + [ + "ImageRewardLoader", + "ImageRewardScore" + ], + { + "title_aux": "ImageReward" + } + ], + "https://github.com/Zuellni/ComfyUI-ExLlama": [ + [ + "ZuellniExLlamaGenerator", + "ZuellniExLlamaLoader", + "ZuellniTextPreview", + "ZuellniTextReplace" + ], + { + "title_aux": "ComfyUI-ExLlama" + } + ], + "https://github.com/Zuellni/ComfyUI-PickScore-Nodes": [ + [ + "ZuellniPickScoreImageProcessor", + "ZuellniPickScoreLoader", + "ZuellniPickScoreSelector", + "ZuellniPickScoreTextProcessor" + ], + { + "title_aux": "ComfyUI PickScore Nodes" + } + ], + "https://github.com/a1lazydog/ComfyUI-AudioScheduler": [ + [ + "AmplitudeToGraph", + "AmplitudeToNumber", + "AudioToAmplitudeGraph", + "AudioToFFTs", + "BatchAmplitudeSchedule", + "ClipAmplitude", + "GateNormalizedAmplitude", + "LoadAudio", + "NormalizeAmplitude", + "NormalizedAmplitudeDrivenString", + "NormalizedAmplitudeToGraph", + "NormalizedAmplitudeToNumber", + "TransientAmplitudeBasic" + ], + { + "title_aux": "ComfyUI-AudioScheduler" + } + ], + "https://github.com/abdozmantar/ComfyUI-InstaSwap": [ + [ + "InstaSwapFaceSwap", + "InstaSwapLoadFaceModel", + "InstaSwapSaveFaceModel" + ], + { + "title_aux": "InstaSwap Face Swap Node for ComfyUI" + } + ], + "https://github.com/abyz22/image_control": [ + [ + "abyz22_Convertpipe", + "abyz22_Editpipe", + "abyz22_FirstNonNull", + "abyz22_FromBasicPipe_v2", + "abyz22_Frompipe", + "abyz22_ImpactWildcardEncode", + "abyz22_ImpactWildcardEncode_GetPrompt", + "abyz22_Ksampler", + "abyz22_Padding Image", + "abyz22_RemoveControlnet", + "abyz22_SaveImage", + 
"abyz22_SetQueue", + "abyz22_ToBasicPipe", + "abyz22_Topipe", + "abyz22_blend_onecolor", + "abyz22_blendimages", + "abyz22_bypass", + "abyz22_drawmask", + "abyz22_lamaInpaint", + "abyz22_lamaPreprocessor", + "abyz22_makecircles", + "abyz22_setimageinfo", + "abyz22_smallhead" + ], + { + "title_aux": "image_control" + } + ], + "https://github.com/adbrasi/ComfyUI-TrashNodes-DownloadHuggingface": [ + [ + "DownloadLinkChecker", + "ShowFileNames" + ], + { + "title_aux": "ComfyUI-TrashNodes-DownloadHuggingface" + } + ], + "https://github.com/adieyal/comfyui-dynamicprompts": [ + [ + "DPCombinatorialGenerator", + "DPFeelingLucky", + "DPJinja", + "DPMagicPrompt", + "DPOutput", + "DPRandomGenerator" + ], + { + "title_aux": "DynamicPrompts Custom Nodes" + } + ], + "https://github.com/adriflex/ComfyUI_Blender_Texdiff": [ + [ + "ViewportColor", + "ViewportDepth" + ], + { + "title_aux": "ComfyUI_Blender_Texdiff" + } + ], + "https://github.com/aegis72/aegisflow_utility_nodes": [ + [ + "Add Text To Image", + "Aegisflow CLIP Pass", + "Aegisflow Conditioning Pass", + "Aegisflow Image Pass", + "Aegisflow Latent Pass", + "Aegisflow Mask Pass", + "Aegisflow Model Pass", + "Aegisflow Pos/Neg Pass", + "Aegisflow SDXL Tuple Pass", + "Aegisflow VAE Pass", + "Aegisflow controlnet preprocessor bus", + "Apply Instagram Filter", + "Brightness_Contrast_Ally", + "Flatten Colors", + "Gaussian Blur_Ally", + "GlitchThis Effect", + "Hue Rotation", + "Image Flip_ally", + "Placeholder Tuple", + "Swap Color Mode", + "aegisflow Multi_Pass", + "aegisflow Multi_Pass XL", + "af_pipe_in_15", + "af_pipe_in_xl", + "af_pipe_out_15", + "af_pipe_out_xl" + ], + { + "title_aux": "AegisFlow Utility Nodes" + } + ], + "https://github.com/aegis72/comfyui-styles-all": [ + [ + "menus" + ], + { + "title_aux": "ComfyUI-styles-all" + } + ], + "https://github.com/ai-liam/comfyui_liam_util": [ + [ + "LiamLoadImage" + ], + { + "title_aux": "LiamUtil" + } + ], + "https://github.com/aianimation55/ComfyUI-FatLabels": [ + [ + "FatLabels" + ], + { + "title_aux": "Comfy UI FatLabels" + } + ], + "https://github.com/alexopus/ComfyUI-Image-Saver": [ + [ + "Cfg Literal (Image Saver)", + "Checkpoint Loader with Name (Image Saver)", + "Float Literal (Image Saver)", + "Image Saver", + "Int Literal (Image Saver)", + "Sampler Selector (Image Saver)", + "Scheduler Selector (Image Saver)", + "Seed Generator (Image Saver)", + "String Literal (Image Saver)", + "Width/Height Literal (Image Saver)" + ], + { + "title_aux": "ComfyUI Image Saver" + } + ], + "https://github.com/alpertunga-bile/prompt-generator-comfyui": [ + [ + "Prompt Generator" + ], + { + "title_aux": "prompt-generator" + } + ], + "https://github.com/alsritter/asymmetric-tiling-comfyui": [ + [ + "Asymmetric_Tiling_KSampler" + ], + { + "title_aux": "asymmetric-tiling-comfyui" + } + ], + "https://github.com/alt-key-project/comfyui-dream-project": [ + [ + "Analyze Palette [Dream]", + "Beat Curve [Dream]", + "Big Float Switch [Dream]", + "Big Image Switch [Dream]", + "Big Int Switch [Dream]", + "Big Latent Switch [Dream]", + "Big Palette Switch [Dream]", + "Big Text Switch [Dream]", + "Boolean To Float [Dream]", + "Boolean To Int [Dream]", + "Build Prompt [Dream]", + "CSV Curve [Dream]", + "CSV Generator [Dream]", + "Calculation [Dream]", + "Common Frame Dimensions [Dream]", + "Compare Palettes [Dream]", + "FFMPEG Video Encoder [Dream]", + "File Count [Dream]", + "Finalize Prompt [Dream]", + "Float Input [Dream]", + "Float to Log Entry [Dream]", + "Frame Count Calculator [Dream]", + "Frame Counter (Directory) 
[Dream]", + "Frame Counter (Simple) [Dream]", + "Frame Counter Info [Dream]", + "Frame Counter Offset [Dream]", + "Frame Counter Time Offset [Dream]", + "Image Brightness Adjustment [Dream]", + "Image Color Shift [Dream]", + "Image Contrast Adjustment [Dream]", + "Image Motion [Dream]", + "Image Sequence Blend [Dream]", + "Image Sequence Loader [Dream]", + "Image Sequence Saver [Dream]", + "Image Sequence Tweening [Dream]", + "Int Input [Dream]", + "Int to Log Entry [Dream]", + "Laboratory [Dream]", + "Linear Curve [Dream]", + "Log Entry Joiner [Dream]", + "Log File [Dream]", + "Noise from Area Palettes [Dream]", + "Noise from Palette [Dream]", + "Palette Color Align [Dream]", + "Palette Color Shift [Dream]", + "Sample Image Area as Palette [Dream]", + "Sample Image as Palette [Dream]", + "Saw Curve [Dream]", + "Sine Curve [Dream]", + "Smooth Event Curve [Dream]", + "String Input [Dream]", + "String Tokenizer [Dream]", + "String to Log Entry [Dream]", + "Text Input [Dream]", + "Triangle Curve [Dream]", + "Triangle Event Curve [Dream]", + "WAV Curve [Dream]" + ], + { + "title_aux": "Dream Project Animation Nodes" + } + ], + "https://github.com/alt-key-project/comfyui-dream-video-batches": [ + [ + "Blended Transition [DVB]", + "Calculation [DVB]", + "Create Frame Set [DVB]", + "Divide [DVB]", + "Fade From Black [DVB]", + "Fade To Black [DVB]", + "Float Input [DVB]", + "For Each Done [DVB]", + "For Each Filename [DVB]", + "Frame Set Append [DVB]", + "Frame Set Frame Dimensions Scaled [DVB]", + "Frame Set Index Offset [DVB]", + "Frame Set Merger [DVB]", + "Frame Set Reindex [DVB]", + "Frame Set Repeat [DVB]", + "Frame Set Reverse [DVB]", + "Frame Set Split Beginning [DVB]", + "Frame Set Split End [DVB]", + "Frame Set Splitter [DVB]", + "Generate Inbetween Frames [DVB]", + "Int Input [DVB]", + "Linear Camera Pan [DVB]", + "Linear Camera Roll [DVB]", + "Linear Camera Zoom [DVB]", + "Load Image From Path [DVB]", + "Multiply [DVB]", + "Sine Camera Pan [DVB]", + "Sine Camera Roll [DVB]", + "Sine Camera Zoom [DVB]", + "String Input [DVB]", + "Text Input [DVB]", + "Trace Memory Allocation [DVB]", + "Unwrap Frame Set [DVB]" + ], + { + "title_aux": "Dream Video Batches" + } + ], + "https://github.com/an90ray/ComfyUI_RErouter_CustomNodes": [ + [ + "CLIPTextEncode (RE)", + "CLIPTextEncodeSDXL (RE)", + "CLIPTextEncodeSDXLRefiner (RE)", + "Int (RE)", + "RErouter <=", + "RErouter =>", + "String (RE)" + ], + { + "title_aux": "ComfyUI_RErouter_CustomNodes" + } + ], + "https://github.com/andersxa/comfyui-PromptAttention": [ + [ + "CLIPAttentionMaskEncode" + ], + { + "title_aux": "CLIP Directional Prompt Attention" + } + ], + "https://github.com/antrobot1234/antrobots-comfyUI-nodepack": [ + [ + "composite", + "crop", + "paste", + "preview_mask", + "scale" + ], + { + "title_aux": "antrobots ComfyUI Nodepack" + } + ], + "https://github.com/asagi4/ComfyUI-CADS": [ + [ + "CADS" + ], + { + "title_aux": "ComfyUI-CADS" + } + ], + "https://github.com/asagi4/comfyui-prompt-control": [ + [ + "EditableCLIPEncode", + "FilterSchedule", + "LoRAScheduler", + "PCApplySettings", + "PCPromptFromSchedule", + "PCScheduleSettings", + "PCSplitSampling", + "PromptControlSimple", + "PromptToSchedule", + "ScheduleToCond", + "ScheduleToModel" + ], + { + "title_aux": "ComfyUI prompt control" + } + ], + "https://github.com/asagi4/comfyui-utility-nodes": [ + [ + "MUForceCacheClear", + "MUJinjaRender", + "MUSimpleWildcard" + ], + { + "title_aux": "asagi4/comfyui-utility-nodes" + } + ], + "https://github.com/aszc-dev/ComfyUI-CoreMLSuite": [ 
+ [ + "Core ML Converter", + "Core ML LCM Converter", + "Core ML LoRA Loader", + "CoreMLModelAdapter", + "CoreMLSampler", + "CoreMLSamplerAdvanced", + "CoreMLUNetLoader" + ], + { + "title_aux": "Core ML Suite for ComfyUI" + } + ], + "https://github.com/avatechai/avatar-graph-comfyui": [ + [ + "ApplyMeshTransformAsShapeKey", + "B_ENUM", + "B_VECTOR3", + "B_VECTOR4", + "Combine Points", + "CreateShapeFlow", + "ExportBlendshapes", + "ExportGLTF", + "Extract Boundary Points", + "Image Alpha Mask Merge", + "ImageBridge", + "LoadImageFromRequest", + "LoadImageWithAlpha", + "LoadValueFromRequest", + "SAM MultiLayer", + "Save Image With Workflow" + ], + { + "author": "Avatech Limited", + "description": "Include nodes for sam + bpy operation, that allows workflow creations for generative 2d character rig.", + "nickname": "Avatar Graph", + "title": "Avatar Graph", + "title_aux": "avatar-graph-comfyui" + } + ], + "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes": [ + [ + "HaojihuiClipScoreFakeImageProcessor", + "HaojihuiClipScoreImageProcessor", + "HaojihuiClipScoreImageScore", + "HaojihuiClipScoreLoader", + "HaojihuiClipScoreRealImageProcessor", + "HaojihuiClipScoreTextProcessor" + ], + { + "title_aux": "ComfyUI-ClipScore-Nodes" + } + ], + "https://github.com/badjeff/comfyui_lora_tag_loader": [ + [ + "LoraTagLoader" + ], + { + "title_aux": "LoRA Tag Loader for ComfyUI" + } + ], + "https://github.com/banodoco/steerable-motion": [ + [ + "BatchCreativeInterpolation" + ], + { + "title_aux": "Steerable Motion" + } + ], + "https://github.com/bash-j/mikey_nodes": [ + [ + "AddMetaData", + "Batch Crop Image", + "Batch Crop Resize Inplace", + "Batch Load Images", + "Batch Resize Image for SDXL", + "Checkpoint Loader Simple Mikey", + "CinematicLook", + "Empty Latent Ratio Custom SDXL", + "Empty Latent Ratio Select SDXL", + "EvalFloats", + "FaceFixerOpenCV", + "FileNamePrefix", + "FileNamePrefixDateDirFirst", + "Float to String", + "HaldCLUT", + "Image Caption", + "ImageBorder", + "ImageOverlay", + "ImagePaste", + "Int to String", + "LMStudioPrompt", + "Load Image Based on Number", + "LoraSyntaxProcessor", + "Mikey Sampler", + "Mikey Sampler Base Only", + "Mikey Sampler Base Only Advanced", + "Mikey Sampler Tiled", + "Mikey Sampler Tiled Base Only", + "MikeySamplerTiledAdvanced", + "MikeySamplerTiledAdvancedBaseOnly", + "OobaPrompt", + "PresetRatioSelector", + "Prompt With SDXL", + "Prompt With Style", + "Prompt With Style V2", + "Prompt With Style V3", + "Range Float", + "Range Integer", + "Ratio Advanced", + "Resize Image for SDXL", + "Save Image If True", + "Save Image With Prompt Data", + "Save Images Mikey", + "Save Images No Display", + "SaveMetaData", + "SearchAndReplace", + "Seed String", + "Style Conditioner", + "Style Conditioner Base Only", + "Text2InputOr3rdOption", + "TextCombinations", + "TextCombinations3", + "TextConcat", + "TextPreserve", + "Upscale Tile Calculator", + "Wildcard Processor", + "WildcardAndLoraSyntaxProcessor", + "WildcardOobaPrompt" + ], + { + "title_aux": "Mikey Nodes" + } + ], + "https://github.com/bedovyy/ComfyUI_NAIDGenerator": [ + [ + "GenerateNAID", + "Img2ImgOptionNAID", + "InpaintingOptionNAID", + "MaskImageToNAID", + "ModelOptionNAID", + "PromptToNAID" + ], + { + "title_aux": "ComfyUI_NAIDGenerator" + } + ], + "https://github.com/biegert/ComfyUI-CLIPSeg/raw/main/custom_nodes/clipseg.py": [ + [ + "CLIPSeg", + "CombineSegMasks" + ], + { + "title_aux": "CLIPSeg" + } + ], + "https://github.com/bilal-arikan/ComfyUI_TextAssets": [ + [ + "LoadTextAsset" + ], + { + 
"title_aux": "ComfyUI_TextAssets" + } + ], + "https://github.com/blepping/ComfyUI-bleh": [ + [ + "BlehDeepShrink", + "BlehDiscardPenultimateSigma", + "BlehForceSeedSampler", + "BlehHyperTile", + "BlehInsaneChainSampler", + "BlehModelPatchConditional" + ], + { + "title_aux": "ComfyUI-bleh" + } + ], + "https://github.com/blepping/ComfyUI-sonar": [ + [ + "NoisyLatentLike", + "SamplerSonarDPMPPSDE", + "SamplerSonarEuler", + "SamplerSonarEulerA", + "SonarCustomNoise", + "SonarGuidanceConfig" + ], + { + "title_aux": "ComfyUI-sonar" + } + ], + "https://github.com/bmad4ever/comfyui_ab_samplercustom": [ + [ + "AB SamplerCustom (experimental)" + ], + { + "title_aux": "comfyui_ab_sampler" + } + ], + "https://github.com/bmad4ever/comfyui_bmad_nodes": [ + [ + "AdaptiveThresholding", + "Add String To Many", + "AddAlpha", + "AdjustRect", + "AnyToAny", + "BoundingRect (contours)", + "BuildColorRangeAdvanced (hsv)", + "BuildColorRangeHSV (hsv)", + "CLAHE", + "CLIPEncodeMultiple", + "CLIPEncodeMultipleAdvanced", + "ChameleonMask", + "CheckpointLoader (dirty)", + "CheckpointLoaderSimple (dirty)", + "Color (RGB)", + "Color (hexadecimal)", + "Color Clip", + "Color Clip (advanced)", + "Color Clip ADE20k", + "ColorDictionary", + "ColorDictionary (custom)", + "Conditioning (combine multiple)", + "Conditioning (combine selective)", + "Conditioning Grid (cond)", + "Conditioning Grid (string)", + "Conditioning Grid (string) Advanced", + "Contour To Mask", + "Contours", + "ControlNetHadamard", + "ControlNetHadamard (manual)", + "ConvertImg", + "CopyMakeBorder", + "CreateRequestMetadata", + "DistanceTransform", + "Draw Contour(s)", + "EqualizeHistogram", + "ExtendColorList", + "ExtendCondList", + "ExtendFloatList", + "ExtendImageList", + "ExtendIntList", + "ExtendLatentList", + "ExtendMaskList", + "ExtendModelList", + "ExtendStringList", + "FadeMaskEdges", + "Filter Contour", + "FindComplementaryColor", + "FindThreshold", + "FlatLatentsIntoSingleGrid", + "Framed Mask Grab Cut", + "Framed Mask Grab Cut 2", + "FromListGet1Color", + "FromListGet1Cond", + "FromListGet1Float", + "FromListGet1Image", + "FromListGet1Int", + "FromListGet1Latent", + "FromListGet1Mask", + "FromListGet1Model", + "FromListGet1String", + "FromListGetColors", + "FromListGetConds", + "FromListGetFloats", + "FromListGetImages", + "FromListGetInts", + "FromListGetLatents", + "FromListGetMasks", + "FromListGetModels", + "FromListGetStrings", + "Get Contour from list", + "Get Models", + "Get Prompt", + "HypernetworkLoader (dirty)", + "ImageBatchToList", + "InRange (hsv)", + "Inpaint", + "Input/String to Int Array", + "KMeansColor", + "Load 64 Encoded Image", + "LoraLoader (dirty)", + "MaskGrid N KSamplers Advanced", + "MaskOuterBlur", + "Merge Latent Batch Gridwise", + "MonoMerge", + "MorphologicOperation", + "MorphologicSkeletoning", + "NaiveAutoKMeansColor", + "OtsuThreshold", + "RGB to HSV", + "Rect Grab Cut", + "Remap", + "RemapBarrelDistortion", + "RemapFromInsideParabolas", + "RemapFromQuadrilateral (homography)", + "RemapInsideParabolas", + "RemapInsideParabolasAdvanced", + "RemapPinch", + "RemapReverseBarrelDistortion", + "RemapStretch", + "RemapToInnerCylinder", + "RemapToOuterCylinder", + "RemapToQuadrilateral", + "RemapWarpPolar", + "Repeat Into Grid (image)", + "Repeat Into Grid (latent)", + "RequestInputs", + "SampleColorHSV", + "Save Image (api)", + "SeamlessClone", + "SeamlessClone (simple)", + "SetRequestStateToComplete", + "String", + "String to Float", + "String to Integer", + "ToColorList", + "ToCondList", + "ToFloatList", + 
"ToImageList", + "ToIntList", + "ToLatentList", + "ToMaskList", + "ToModelList", + "ToStringList", + "UnGridify (image)", + "VAEEncodeBatch" + ], + { + "title_aux": "Bmad Nodes" + } + ], + "https://github.com/bmad4ever/comfyui_lists_cartesian_product": [ + [ + "AnyListCartesianProduct" + ], + { + "title_aux": "Lists Cartesian Product" + } + ], + "https://github.com/bradsec/ComfyUI_ResolutionSelector": [ + [ + "ResolutionSelector" + ], + { + "title_aux": "ResolutionSelector for ComfyUI" + } + ], + "https://github.com/braintacles/braintacles-comfyui-nodes": [ + [ + "CLIPTextEncodeSDXL-Multi-IO", + "CLIPTextEncodeSDXL-Pipe", + "Empty Latent Image from Aspect-Ratio", + "Random Find and Replace", + "VAE Decode Pipe", + "VAE Decode Tiled Pipe", + "VAE Encode Pipe", + "VAE Encode Tiled Pipe" + ], + { + "title_aux": "braintacles-nodes" + } + ], + "https://github.com/brianfitzgerald/style_aligned_comfy": [ + [ + "StyleAlignedBatchAlign", + "StyleAlignedReferenceSampler", + "StyleAlignedSampleReferenceLatents" + ], + { + "title_aux": "StyleAligned for ComfyUI" + } + ], + "https://github.com/bronkula/comfyui-fitsize": [ + [ + "FS: Crop Image Into Even Pieces", + "FS: Fit Image And Resize", + "FS: Fit Size From Image", + "FS: Fit Size From Int", + "FS: Image Region To Mask", + "FS: Load Image And Resize To Fit", + "FS: Pick Image From Batch", + "FS: Pick Image From Batches", + "FS: Pick Image From List" + ], + { + "title_aux": "comfyui-fitsize" + } + ], + "https://github.com/bruefire/ComfyUI-SeqImageLoader": [ + [ + "VFrame Loader With Mask Editor", + "Video Loader With Mask Editor" + ], + { + "title_aux": "ComfyUI Sequential Image Loader" + } + ], + "https://github.com/budihartono/comfyui_otonx_nodes": [ + [ + "OTX Integer Multiple Inputs 4", + "OTX Integer Multiple Inputs 5", + "OTX Integer Multiple Inputs 6", + "OTX KSampler Feeder", + "OTX Versatile Multiple Inputs 4", + "OTX Versatile Multiple Inputs 5", + "OTX Versatile Multiple Inputs 6" + ], + { + "title_aux": "Otonx's Custom Nodes" + } + ], + "https://github.com/bvhari/ComfyUI_ImageProcessing": [ + [ + "BilateralFilter", + "Brightness", + "Gamma", + "Hue", + "Saturation", + "SigmoidCorrection", + "UnsharpMask" + ], + { + "title_aux": "ImageProcessing" + } + ], + "https://github.com/bvhari/ComfyUI_LatentToRGB": [ + [ + "LatentToRGB" + ], + { + "title_aux": "LatentToRGB" + } + ], + "https://github.com/bvhari/ComfyUI_PerpWeight": [ + [ + "CLIPTextEncodePerpWeight" + ], + { + "title_aux": "ComfyUI_PerpWeight" + } + ], + "https://github.com/catscandrive/comfyui-imagesubfolders/raw/main/loadImageWithSubfolders.py": [ + [ + "LoadImagewithSubfolders" + ], + { + "title_aux": "Image loader with subfolders" + } + ], + "https://github.com/celsojr2013/comfyui_simpletools/raw/main/google_translator.py": [ + [ + "GoogleTranslator" + ], + { + "title_aux": "ComfyUI SimpleTools Suit" + } + ], + "https://github.com/ceruleandeep/ComfyUI-LLaVA-Captioner": [ + [ + "LlavaCaptioner" + ], + { + "title_aux": "ComfyUI LLaVA Captioner" + } + ], + "https://github.com/chaojie/ComfyUI-DragNUWA": [ + [ + "BrushMotion", + "CompositeMotionBrush", + "CompositeMotionBrushWithoutModel", + "DragNUWA Run", + "DragNUWA Run MotionBrush", + "Get First Image", + "Get Last Image", + "InstantCameraMotionBrush", + "InstantObjectMotionBrush", + "Load CheckPoint DragNUWA", + "Load MotionBrush From Optical Flow", + "Load MotionBrush From Optical Flow Directory", + "Load MotionBrush From Optical Flow Without Model", + "Load MotionBrush From Tracking Points", + "Load MotionBrush From Tracking 
Points Without Model", + "Load Pose KeyPoints", + "Loop", + "LoopEnd_IMAGE", + "LoopStart_IMAGE", + "Split Tracking Points" + ], + { + "title_aux": "ComfyUI-DragNUWA" + } + ], + "https://github.com/chaojie/ComfyUI-DynamiCrafter": [ + [ + "DynamiCrafter Simple", + "DynamiCrafterLoader" + ], + { + "title_aux": "ComfyUI-DynamiCrafter" + } + ], + "https://github.com/chaojie/ComfyUI-I2VGEN-XL": [ + [ + "I2VGEN-XL Simple", + "Modelscope Pipeline Loader" + ], + { + "title_aux": "ComfyUI-I2VGEN-XL" + } + ], + "https://github.com/chaojie/ComfyUI-LightGlue": [ + [ + "LightGlue Loader", + "LightGlue Simple", + "LightGlue Simple Multi" + ], + { + "title_aux": "ComfyUI-LightGlue" + } + ], + "https://github.com/chaojie/ComfyUI-Moore-AnimateAnyone": [ + [ + "Moore-AnimateAnyone Denoising Unet", + "Moore-AnimateAnyone Image Encoder", + "Moore-AnimateAnyone Pipeline Loader", + "Moore-AnimateAnyone Pose Guider", + "Moore-AnimateAnyone Reference Unet", + "Moore-AnimateAnyone Simple", + "Moore-AnimateAnyone VAE" + ], + { + "title_aux": "ComfyUI-Moore-AnimateAnyone" + } + ], + "https://github.com/chaojie/ComfyUI-Motion-Vector-Extractor": [ + [ + "Motion Vector Extractor", + "VideoCombineThenPath" + ], + { + "title_aux": "ComfyUI-Motion-Vector-Extractor" + } + ], + "https://github.com/chaojie/ComfyUI-MotionCtrl": [ + [ + "Load Motion Camera Preset", + "Load Motion Traj Preset", + "Load Motionctrl Checkpoint", + "Motionctrl Cond", + "Motionctrl Sample", + "Motionctrl Sample Simple", + "Select Image Indices" + ], + { + "title_aux": "ComfyUI-MotionCtrl" + } + ], + "https://github.com/chaojie/ComfyUI-MotionCtrl-SVD": [ + [ + "Load Motionctrl-SVD Camera Preset", + "Load Motionctrl-SVD Checkpoint", + "Motionctrl-SVD Sample Simple" + ], + { + "title_aux": "ComfyUI-MotionCtrl-SVD" + } + ], + "https://github.com/chaojie/ComfyUI-Panda3d": [ + [ + "Panda3dAmbientLight", + "Panda3dAttachNewNode", + "Panda3dBase", + "Panda3dDirectionalLight", + "Panda3dLoadDepthModel", + "Panda3dLoadModel", + "Panda3dLoadTexture", + "Panda3dModelMerge", + "Panda3dTest", + "Panda3dTextureMerge" + ], + { + "title_aux": "ComfyUI-Panda3d" + } + ], + "https://github.com/chaojie/ComfyUI-Pymunk": [ + [ + "PygameRun", + "PygameSurface", + "PymunkDynamicBox", + "PymunkDynamicCircle", + "PymunkRun", + "PymunkShapeMerge", + "PymunkSpace", + "PymunkStaticLine" + ], + { + "title_aux": "ComfyUI-Pymunk" + } + ], + "https://github.com/chaojie/ComfyUI-RAFT": [ + [ + "Load MotionBrush", + "RAFT Run", + "Save MotionBrush", + "VizMotionBrush" + ], + { + "title_aux": "ComfyUI-RAFT" + } + ], + "https://github.com/chflame163/ComfyUI_LayerStyle": [ + [ + "LayerColor: Brightness & Contrast", + "LayerColor: ColorAdapter", + "LayerColor: Exposure", + "LayerColor: Gamma", + "LayerColor: HSV", + "LayerColor: LAB", + "LayerColor: LUT Apply", + "LayerColor: RGB", + "LayerColor: YUV", + "LayerFilter: ChannelShake", + "LayerFilter: ColorMap", + "LayerFilter: GaussianBlur", + "LayerFilter: MotionBlur", + "LayerFilter: Sharp & Soft", + "LayerFilter: SkinBeauty", + "LayerFilter: SoftLight", + "LayerFilter: WaterColor", + "LayerMask: CreateGradientMask", + "LayerMask: MaskBoxDetect", + "LayerMask: MaskByDifferent", + "LayerMask: MaskEdgeShrink", + "LayerMask: MaskEdgeUltraDetail", + "LayerMask: MaskGradient", + "LayerMask: MaskGrow", + "LayerMask: MaskInvert", + "LayerMask: MaskMotionBlur", + "LayerMask: MaskPreview", + "LayerMask: MaskStroke", + "LayerMask: PixelSpread", + "LayerMask: RemBgUltra", + "LayerMask: SegmentAnythingUltra", + "LayerStyle: ColorOverlay", + 
"LayerStyle: DropShadow", + "LayerStyle: GradientOverlay", + "LayerStyle: InnerGlow", + "LayerStyle: InnerShadow", + "LayerStyle: OuterGlow", + "LayerStyle: Stroke", + "LayerUtility: ColorImage", + "LayerUtility: ColorPicker", + "LayerUtility: CropByMask", + "LayerUtility: ExtendCanvas", + "LayerUtility: GetColorTone", + "LayerUtility: GetImageSize", + "LayerUtility: GradientImage", + "LayerUtility: ImageBlend", + "LayerUtility: ImageBlendAdvance", + "LayerUtility: ImageChannelMerge", + "LayerUtility: ImageChannelSplit", + "LayerUtility: ImageMaskScaleAs", + "LayerUtility: ImageOpacity", + "LayerUtility: ImageScaleRestore", + "LayerUtility: ImageShift", + "LayerUtility: LayerImageTransform", + "LayerUtility: LayerMaskTransform", + "LayerUtility: PrintInfo", + "LayerUtility: RestoreCropBox", + "LayerUtility: TextImage", + "LayerUtility: XY to Percent" + ], + { + "title_aux": "ComfyUI Layer Style" + } + ], + "https://github.com/chflame163/ComfyUI_MSSpeech_TTS": [ + [ + "Input Trigger", + "MicrosoftSpeech_TTS", + "Play Sound", + "Play Sound (loop)" + ], + { + "title_aux": "ComfyUI_MSSpeech_TTS" + } + ], + "https://github.com/chflame163/ComfyUI_WordCloud": [ + [ + "ComfyWordCloud", + "LoadTextFile", + "RGB_Picker" + ], + { + "title_aux": "ComfyUI_WordCloud" + } + ], + "https://github.com/chibiace/ComfyUI-Chibi-Nodes": [ + [ + "ConditionText", + "ConditionTextMulti", + "ImageAddText", + "ImageSimpleResize", + "ImageSizeInfo", + "ImageTool", + "Int2String", + "LoadEmbedding", + "LoadImageExtended", + "Loader", + "Prompts", + "RandomResolutionLatent", + "SaveImages", + "SeedGenerator", + "SimpleSampler", + "TextSplit", + "Textbox", + "Wildcards" + ], + { + "title_aux": "ComfyUI-Chibi-Nodes" + } + ], + "https://github.com/chrisgoringe/cg-image-picker": [ + [ + "Preview Chooser", + "Preview Chooser Fabric" + ], + { + "author": "chrisgoringe", + "description": "Custom nodes that preview images and pause the workflow to allow the user to select one or more to progress", + "nickname": "Image Chooser", + "title": "Image Chooser", + "title_aux": "Image chooser" + } + ], + "https://github.com/chrisgoringe/cg-noise": [ + [ + "Hijack", + "KSampler Advanced with Variations", + "KSampler with Variations", + "UnHijack" + ], + { + "title_aux": "Variation seeds" + } + ], + "https://github.com/chrisgoringe/cg-use-everywhere": [ + [ + "Seed Everywhere" + ], + { + "nodename_pattern": "(^(Prompts|Anything) Everywhere|Simple String)", + "title_aux": "Use Everywhere (UE Nodes)" + } + ], + "https://github.com/city96/ComfyUI_ColorMod": [ + [ + "ColorModEdges", + "ColorModPivot", + "LoadImageHighPrec", + "PreviewImageHighPrec", + "SaveImageHighPrec" + ], + { + "title_aux": "ComfyUI_ColorMod" + } + ], + "https://github.com/city96/ComfyUI_DiT": [ + [ + "DiTCheckpointLoader", + "DiTCheckpointLoaderSimple", + "DiTLabelCombine", + "DiTLabelSelect", + "DiTSampler" + ], + { + "title_aux": "ComfyUI_DiT [WIP]" + } + ], + "https://github.com/city96/ComfyUI_ExtraModels": [ + [ + "DiTCondLabelEmpty", + "DiTCondLabelSelect", + "DitCheckpointLoader", + "ExtraVAELoader", + "PixArtCheckpointLoader", + "PixArtDPMSampler", + "PixArtLoraLoader", + "PixArtResolutionSelect", + "PixArtT5TextEncode", + "T5TextEncode", + "T5v11Loader" + ], + { + "title_aux": "Extra Models for ComfyUI" + } + ], + "https://github.com/city96/ComfyUI_NetDist": [ + [ + "CombineImageBatch", + "FetchRemote", + "LoadCurrentWorkflowJSON", + "LoadDiskWorkflowJSON", + "LoadImageUrl", + "LoadLatentNumpy", + "LoadLatentUrl", + "RemoteChainEnd", + "RemoteChainStart", + 
"RemoteQueueSimple", + "RemoteQueueWorker", + "SaveDiskWorkflowJSON", + "SaveImageUrl", + "SaveLatentNumpy" + ], + { + "title_aux": "ComfyUI_NetDist" + } + ], + "https://github.com/city96/SD-Advanced-Noise": [ + [ + "LatentGaussianNoise", + "MathEncode" + ], + { + "title_aux": "SD-Advanced-Noise" + } + ], + "https://github.com/city96/SD-Latent-Interposer": [ + [ + "LatentInterposer" + ], + { + "title_aux": "Latent-Interposer" + } + ], + "https://github.com/city96/SD-Latent-Upscaler": [ + [ + "LatentUpscaler" + ], + { + "title_aux": "SD-Latent-Upscaler" + } + ], + "https://github.com/civitai/comfy-nodes": [ + [ + "CivitAI_Checkpoint_Loader", + "CivitAI_Lora_Loader" + ], + { + "title_aux": "comfy-nodes" + } + ], + "https://github.com/comfyanonymous/ComfyUI": [ + [ + "BasicScheduler", + "CLIPLoader", + "CLIPMergeSimple", + "CLIPSave", + "CLIPSetLastLayer", + "CLIPTextEncode", + "CLIPTextEncodeControlnet", + "CLIPTextEncodeSDXL", + "CLIPTextEncodeSDXLRefiner", + "CLIPVisionEncode", + "CLIPVisionLoader", + "Canny", + "CheckpointLoader", + "CheckpointLoaderSimple", + "CheckpointSave", + "ConditioningAverage", + "ConditioningCombine", + "ConditioningConcat", + "ConditioningSetArea", + "ConditioningSetAreaPercentage", + "ConditioningSetAreaStrength", + "ConditioningSetMask", + "ConditioningSetTimestepRange", + "ConditioningZeroOut", + "ControlNetApply", + "ControlNetApplyAdvanced", + "ControlNetLoader", + "CropMask", + "DiffControlNetLoader", + "DiffusersLoader", + "DualCLIPLoader", + "EmptyImage", + "EmptyLatentImage", + "ExponentialScheduler", + "FeatherMask", + "FlipSigmas", + "FreeU", + "FreeU_V2", + "GLIGENLoader", + "GLIGENTextBoxApply", + "GrowMask", + "HyperTile", + "HypernetworkLoader", + "ImageBatch", + "ImageBlend", + "ImageBlur", + "ImageColorToMask", + "ImageCompositeMasked", + "ImageCrop", + "ImageFromBatch", + "ImageInvert", + "ImageOnlyCheckpointLoader", + "ImageOnlyCheckpointSave", + "ImagePadForOutpaint", + "ImageQuantize", + "ImageScale", + "ImageScaleBy", + "ImageScaleToTotalPixels", + "ImageSharpen", + "ImageToMask", + "ImageUpscaleWithModel", + "InpaintModelConditioning", + "InvertMask", + "JoinImageWithAlpha", + "KSampler", + "KSamplerAdvanced", + "KSamplerSelect", + "KarrasScheduler", + "LatentAdd", + "LatentBatch", + "LatentBatchSeedBehavior", + "LatentBlend", + "LatentComposite", + "LatentCompositeMasked", + "LatentCrop", + "LatentFlip", + "LatentFromBatch", + "LatentInterpolate", + "LatentMultiply", + "LatentRotate", + "LatentSubtract", + "LatentUpscale", + "LatentUpscaleBy", + "LoadImage", + "LoadImageMask", + "LoadLatent", + "LoraLoader", + "LoraLoaderModelOnly", + "MaskComposite", + "MaskToImage", + "ModelMergeAdd", + "ModelMergeBlocks", + "ModelMergeSimple", + "ModelMergeSubtract", + "ModelSamplingContinuousEDM", + "ModelSamplingDiscrete", + "ModelSamplingStableCascade", + "PatchModelAddDownscale", + "PerpNeg", + "PhotoMakerEncode", + "PhotoMakerLoader", + "PolyexponentialScheduler", + "PorterDuffImageComposite", + "PreviewImage", + "RebatchImages", + "RebatchLatents", + "RepeatImageBatch", + "RepeatLatentBatch", + "RescaleCFG", + "SDTurboScheduler", + "SD_4XUpscale_Conditioning", + "SVD_img2vid_Conditioning", + "SamplerCustom", + "SamplerDPMPP_2M_SDE", + "SamplerDPMPP_SDE", + "SaveAnimatedPNG", + "SaveAnimatedWEBP", + "SaveImage", + "SaveLatent", + "SelfAttentionGuidance", + "SetLatentNoiseMask", + "SolidMask", + "SplitImageWithAlpha", + "SplitSigmas", + "StableCascade_EmptyLatentImage", + "StableCascade_StageB_Conditioning", + "StableZero123_Conditioning", + 
"StableZero123_Conditioning_Batched", + "StyleModelApply", + "StyleModelLoader", + "TomePatchModel", + "UNETLoader", + "UpscaleModelLoader", + "VAEDecode", + "VAEDecodeTiled", + "VAEEncode", + "VAEEncodeForInpaint", + "VAEEncodeTiled", + "VAELoader", + "VAESave", + "VPScheduler", + "VideoLinearCFGGuidance", + "unCLIPCheckpointLoader", + "unCLIPConditioning" + ], + { + "title_aux": "ComfyUI" + } + ], + "https://github.com/comfyanonymous/ComfyUI_experiments": [ + [ + "ModelMergeBlockNumber", + "ModelMergeSDXL", + "ModelMergeSDXLDetailedTransformers", + "ModelMergeSDXLTransformers", + "ModelSamplerTonemapNoiseTest", + "ReferenceOnlySimple", + "RescaleClassifierFreeGuidanceTest", + "TonemapNoiseWithRescaleCFG" + ], + { + "title_aux": "ComfyUI_experiments" + } + ], + "https://github.com/concarne000/ConCarneNode": [ + [ + "BingImageGrabber", + "Zephyr" + ], + { + "title_aux": "ConCarneNode" + } + ], + "https://github.com/coreyryanhanson/ComfyQR": [ + [ + "comfy-qr-by-image-size", + "comfy-qr-by-module-size", + "comfy-qr-by-module-split", + "comfy-qr-mask_errors" + ], + { + "title_aux": "ComfyQR" + } + ], + "https://github.com/coreyryanhanson/ComfyQR-scanning-nodes": [ + [ + "comfy-qr-read", + "comfy-qr-validate" + ], + { + "title_aux": "ComfyQR-scanning-nodes" + } + ], + "https://github.com/cubiq/ComfyUI_IPAdapter_plus": [ + [ + "IPAdapterApply", + "IPAdapterApplyEncoded", + "IPAdapterApplyFaceID", + "IPAdapterBatchEmbeds", + "IPAdapterEncoder", + "IPAdapterLoadEmbeds", + "IPAdapterModelLoader", + "IPAdapterSaveEmbeds", + "IPAdapterTilesMasked", + "InsightFaceLoader", + "PrepImageForClipVision", + "PrepImageForInsightFace" + ], + { + "title_aux": "ComfyUI_IPAdapter_plus" + } + ], + "https://github.com/cubiq/ComfyUI_InstantID": [ + [ + "ApplyInstantID", + "FaceKeypointsPreprocessor", + "InstantIDFaceAnalysis", + "InstantIDModelLoader" + ], + { + "title_aux": "ComfyUI InstantID (Native Support)" + } + ], + "https://github.com/cubiq/ComfyUI_SimpleMath": [ + [ + "SimpleMath", + "SimpleMathDebug" + ], + { + "title_aux": "Simple Math" + } + ], + "https://github.com/cubiq/ComfyUI_essentials": [ + [ + "BatchCount+", + "CLIPTextEncodeSDXL+", + "ConsoleDebug+", + "DebugTensorShape+", + "DrawText+", + "ExtractKeyframes+", + "GetImageSize+", + "ImageApplyLUT+", + "ImageCASharpening+", + "ImageCompositeFromMaskBatch+", + "ImageCrop+", + "ImageDesaturate+", + "ImageEnhanceDifference+", + "ImageExpandBatch+", + "ImageFlip+", + "ImageFromBatch+", + "ImagePosterize+", + "ImageRemoveBackground+", + "ImageResize+", + "ImageSeamCarving+", + "KSamplerVariationsStochastic+", + "KSamplerVariationsWithNoise+", + "MaskBatch+", + "MaskBlur+", + "MaskExpandBatch+", + "MaskFlip+", + "MaskFromBatch+", + "MaskFromColor+", + "MaskPreview+", + "ModelCompile+", + "NoiseFromImage~", + "RemBGSession+", + "RemoveLatentMask+", + "SDXLEmptyLatentSizePicker+", + "SimpleMath+", + "TransitionMask+" + ], + { + "title_aux": "ComfyUI Essentials" + } + ], + "https://github.com/dagthomas/comfyui_dagthomas": [ + [ + "CSL", + "CSVPromptGenerator", + "PromptGenerator" + ], + { + "title_aux": "SDXL Auto Prompter" + } + ], + "https://github.com/daniel-lewis-ab/ComfyUI-Llama": [ + [ + "Call LLM Advanced", + "Call LLM Basic", + "LLM_Create_Completion Advanced", + "LLM_Detokenize", + "LLM_Embed", + "LLM_Eval", + "LLM_Load_State", + "LLM_Reset", + "LLM_Sample", + "LLM_Save_State", + "LLM_Token_BOS", + "LLM_Token_EOS", + "LLM_Tokenize", + "Load LLM Model Advanced", + "Load LLM Model Basic" + ], + { + "title_aux": "ComfyUI-Llama" + } + ], + 
"https://github.com/daniel-lewis-ab/ComfyUI-TTS": [ + [ + "Load_Piper_Model", + "Piper_Speak_Text" + ], + { + "title_aux": "ComfyUI-TTS" + } + ], + "https://github.com/darkpixel/darkprompts": [ + [ + "DarkCombine", + "DarkFaceIndexShuffle", + "DarkLoRALoader", + "DarkPrompt" + ], + { + "title_aux": "DarkPrompts" + } + ], + "https://github.com/davask/ComfyUI-MarasIT-Nodes": [ + [ + "MarasitBusNode", + "MarasitBusPipeNode", + "MarasitPipeNodeBasic", + "MarasitUniversalBusNode" + ], + { + "title_aux": "MarasIT Nodes" + } + ], + "https://github.com/dave-palt/comfyui_DSP_imagehelpers": [ + [ + "dsp-imagehelpers-concat" + ], + { + "title_aux": "comfyui_DSP_imagehelpers" + } + ], + "https://github.com/dawangraoming/ComfyUI_ksampler_gpu/raw/main/ksampler_gpu.py": [ + [ + "KSamplerAdvancedGPU", + "KSamplerGPU" + ], + { + "title_aux": "KSampler GPU" + } + ], + "https://github.com/daxthin/DZ-FaceDetailer": [ + [ + "DZ_Face_Detailer" + ], + { + "title_aux": "DZ-FaceDetailer" + } + ], + "https://github.com/deroberon/StableZero123-comfyui": [ + [ + "SDZero ImageSplit", + "Stablezero123", + "Stablezero123WithDepth" + ], + { + "title_aux": "StableZero123-comfyui" + } + ], + "https://github.com/deroberon/demofusion-comfyui": [ + [ + "Batch Unsampler", + "Demofusion", + "Demofusion From Single File", + "Iterative Mixing KSampler" + ], + { + "title_aux": "demofusion-comfyui" + } + ], + "https://github.com/dfl/comfyui-clip-with-break": [ + [ + "AdvancedCLIPTextEncodeWithBreak", + "CLIPTextEncodeWithBreak" + ], + { + "author": "dfl", + "description": "CLIP text encoder that does BREAK prompting like A1111", + "nickname": "CLIP with BREAK", + "title": "CLIP with BREAK syntax", + "title_aux": "comfyui-clip-with-break" + } + ], + "https://github.com/digitaljohn/comfyui-propost": [ + [ + "ProPostApplyLUT", + "ProPostDepthMapBlur", + "ProPostFilmGrain", + "ProPostRadialBlur", + "ProPostVignette" + ], + { + "title_aux": "ComfyUI-ProPost" + } + ], + "https://github.com/dimtoneff/ComfyUI-PixelArt-Detector": [ + [ + "PixelArtAddDitherPattern", + "PixelArtDetectorConverter", + "PixelArtDetectorSave", + "PixelArtDetectorToImage", + "PixelArtLoadPalettes" + ], + { + "title_aux": "ComfyUI PixelArt Detector" + } + ], + "https://github.com/diontimmer/ComfyUI-Vextra-Nodes": [ + [ + "Add Text To Image", + "Apply Instagram Filter", + "Create Solid Color", + "Flatten Colors", + "Generate Noise Image", + "GlitchThis Effect", + "Hue Rotation", + "Load Picture Index", + "Pixel Sort", + "Play Sound At Execution", + "Prettify Prompt Using distilgpt2", + "Swap Color Mode" + ], + { + "title_aux": "ComfyUI-Vextra-Nodes" + } + ], + "https://github.com/djbielejeski/a-person-mask-generator": [ + [ + "APersonMaskGenerator" + ], + { + "title_aux": "a-person-mask-generator" + } + ], + "https://github.com/dmarx/ComfyUI-AudioReactive": [ + [ + "OpAbs", + "OpBandpass", + "OpClamp", + "OpHarmonic", + "OpModulo", + "OpNormalize", + "OpNovelty", + "OpPercussive", + "OpPow", + "OpPow2", + "OpPredominant_pulse", + "OpQuantize", + "OpRms", + "OpSmoosh", + "OpSmooth", + "OpSqrt", + "OpStretch", + "OpSustain", + "OpThreshold" + ], + { + "title_aux": "ComfyUI-AudioReactive" + } + ], + "https://github.com/dmarx/ComfyUI-Keyframed": [ + [ + "Example", + "KfAddCurveToPGroup", + "KfAddCurveToPGroupx10", + "KfApplyCurveToCond", + "KfConditioningAdd", + "KfConditioningAddx10", + "KfCurveConstant", + "KfCurveDraw", + "KfCurveFromString", + "KfCurveFromYAML", + "KfCurveInverse", + "KfCurveToAcnLatentKeyframe", + "KfCurvesAdd", + "KfCurvesAddx10", + 
"KfCurvesDivide", + "KfCurvesMultiply", + "KfCurvesMultiplyx10", + "KfCurvesSubtract", + "KfDebug_Clip", + "KfDebug_Cond", + "KfDebug_Curve", + "KfDebug_Float", + "KfDebug_Image", + "KfDebug_Int", + "KfDebug_Latent", + "KfDebug_Model", + "KfDebug_Passthrough", + "KfDebug_Segs", + "KfDebug_String", + "KfDebug_Vae", + "KfDrawSchedule", + "KfEvaluateCurveAtT", + "KfGetCurveFromPGroup", + "KfGetScheduleConditionAtTime", + "KfGetScheduleConditionSlice", + "KfKeyframedCondition", + "KfKeyframedConditionWithText", + "KfPGroupCurveAdd", + "KfPGroupCurveMultiply", + "KfPGroupDraw", + "KfPGroupProd", + "KfPGroupSum", + "KfSetCurveLabel", + "KfSetKeyframe", + "KfSinusoidalAdjustAmplitude", + "KfSinusoidalAdjustFrequency", + "KfSinusoidalAdjustPhase", + "KfSinusoidalAdjustWavelength", + "KfSinusoidalEntangledZeroOneFromFrequencyx2", + "KfSinusoidalEntangledZeroOneFromFrequencyx3", + "KfSinusoidalEntangledZeroOneFromFrequencyx4", + "KfSinusoidalEntangledZeroOneFromFrequencyx5", + "KfSinusoidalEntangledZeroOneFromFrequencyx6", + "KfSinusoidalEntangledZeroOneFromFrequencyx7", + "KfSinusoidalEntangledZeroOneFromFrequencyx8", + "KfSinusoidalEntangledZeroOneFromFrequencyx9", + "KfSinusoidalEntangledZeroOneFromWavelengthx2", + "KfSinusoidalEntangledZeroOneFromWavelengthx3", + "KfSinusoidalEntangledZeroOneFromWavelengthx4", + "KfSinusoidalEntangledZeroOneFromWavelengthx5", + "KfSinusoidalEntangledZeroOneFromWavelengthx6", + "KfSinusoidalEntangledZeroOneFromWavelengthx7", + "KfSinusoidalEntangledZeroOneFromWavelengthx8", + "KfSinusoidalEntangledZeroOneFromWavelengthx9", + "KfSinusoidalGetAmplitude", + "KfSinusoidalGetFrequency", + "KfSinusoidalGetPhase", + "KfSinusoidalGetWavelength", + "KfSinusoidalWithFrequency", + "KfSinusoidalWithWavelength" + ], + { + "title_aux": "ComfyUI-Keyframed" + } + ], + "https://github.com/drago87/ComfyUI_Dragos_Nodes": [ + [ + "file_padding", + "image_info", + "lora_loader", + "vae_loader" + ], + { + "title_aux": "ComfyUI_Dragos_Nodes" + } + ], + "https://github.com/drustan-hawk/primitive-types": [ + [ + "float", + "int", + "string", + "string_multiline" + ], + { + "title_aux": "primitive-types" + } + ], + "https://github.com/ealkanat/comfyui_easy_padding": [ + [ + "comfyui-easy-padding" + ], + { + "title_aux": "ComfyUI Easy Padding" + } + ], + "https://github.com/edenartlab/eden_comfy_pipelines": [ + [ + "CLIP_Interrogator", + "Eden_Bool", + "Eden_Compare", + "Eden_DebugPrint", + "Eden_Float", + "Eden_Int", + "Eden_String", + "Filepicker", + "IMG_blender", + "IMG_padder", + "IMG_scaler", + "IMG_unpadder", + "If ANY execute A else B", + "LatentTypeConversion", + "SaveImageAdvanced", + "VAEDecode_to_folder" + ], + { + "title_aux": "eden_comfy_pipelines" + } + ], + "https://github.com/evanspearman/ComfyMath": [ + [ + "CM_BoolBinaryOperation", + "CM_BoolToInt", + "CM_BoolUnaryOperation", + "CM_BreakoutVec2", + "CM_BreakoutVec3", + "CM_BreakoutVec4", + "CM_ComposeVec2", + "CM_ComposeVec3", + "CM_ComposeVec4", + "CM_FloatBinaryCondition", + "CM_FloatBinaryOperation", + "CM_FloatToInt", + "CM_FloatToNumber", + "CM_FloatUnaryCondition", + "CM_FloatUnaryOperation", + "CM_IntBinaryCondition", + "CM_IntBinaryOperation", + "CM_IntToBool", + "CM_IntToFloat", + "CM_IntToNumber", + "CM_IntUnaryCondition", + "CM_IntUnaryOperation", + "CM_NearestSDXLResolution", + "CM_NumberBinaryCondition", + "CM_NumberBinaryOperation", + "CM_NumberToFloat", + "CM_NumberToInt", + "CM_NumberUnaryCondition", + "CM_NumberUnaryOperation", + "CM_SDXLResolution", + "CM_Vec2BinaryCondition", + 
"CM_Vec2BinaryOperation", + "CM_Vec2ScalarOperation", + "CM_Vec2ToScalarBinaryOperation", + "CM_Vec2ToScalarUnaryOperation", + "CM_Vec2UnaryCondition", + "CM_Vec2UnaryOperation", + "CM_Vec3BinaryCondition", + "CM_Vec3BinaryOperation", + "CM_Vec3ScalarOperation", + "CM_Vec3ToScalarBinaryOperation", + "CM_Vec3ToScalarUnaryOperation", + "CM_Vec3UnaryCondition", + "CM_Vec3UnaryOperation", + "CM_Vec4BinaryCondition", + "CM_Vec4BinaryOperation", + "CM_Vec4ScalarOperation", + "CM_Vec4ToScalarBinaryOperation", + "CM_Vec4ToScalarUnaryOperation", + "CM_Vec4UnaryCondition", + "CM_Vec4UnaryOperation" + ], + { + "title_aux": "ComfyMath" + } + ], + "https://github.com/fearnworks/ComfyUI_FearnworksNodes/raw/main/fw_nodes.py": [ + [ + "Count Files in Directory (FW)", + "Count Tokens (FW)", + "Token Count Ranker(FW)", + "Trim To Tokens (FW)" + ], + { + "title_aux": "Fearnworks Custom Nodes" + } + ], + "https://github.com/fexli/fexli-util-node-comfyui": [ + [ + "FEBCPrompt", + "FEBatchGenStringBCDocker", + "FEColor2Image", + "FEColorOut", + "FEDataInsertor", + "FEDataPacker", + "FEDataUnpacker", + "FEDeepClone", + "FEDictPacker", + "FEDictUnpacker", + "FEEncLoraLoader", + "FEExtraInfoAdd", + "FEGenStringBCDocker", + "FEGenStringGPT", + "FEImageNoiseGenerate", + "FEImagePadForOutpaint", + "FEImagePadForOutpaintByImage", + "FEOperatorIf", + "FEPythonStrOp", + "FERandomLoraSelect", + "FERandomPrompt", + "FERandomizedColor2Image", + "FERandomizedColorOut", + "FERerouteWithName", + "FESaveEncryptImage", + "FETextCombine", + "FETextInput" + ], + { + "title_aux": "fexli-util-node-comfyui" + } + ], + "https://github.com/filipemeneses/comfy_pixelization": [ + [ + "Pixelization" + ], + { + "title_aux": "Pixelization" + } + ], + "https://github.com/filliptm/ComfyUI_Fill-Nodes": [ + [ + "FL_ImageCaptionSaver", + "FL_ImageRandomizer" + ], + { + "title_aux": "ComfyUI_Fill-Nodes" + } + ], + "https://github.com/fitCorder/fcSuite/raw/main/fcSuite.py": [ + [ + "fcFloat", + "fcFloatMatic", + "fcHex", + "fcInteger" + ], + { + "title_aux": "fcSuite" + } + ], + "https://github.com/florestefano1975/comfyui-portrait-master": [ + [ + "PortraitMaster" + ], + { + "title_aux": "comfyui-portrait-master" + } + ], + "https://github.com/florestefano1975/comfyui-prompt-composer": [ + [ + "PromptComposerCustomLists", + "PromptComposerEffect", + "PromptComposerGrouping", + "PromptComposerMerge", + "PromptComposerStyler", + "PromptComposerTextSingle", + "promptComposerTextMultiple" + ], + { + "title_aux": "comfyui-prompt-composer" + } + ], + "https://github.com/flowtyone/ComfyUI-Flowty-LDSR": [ + [ + "LDSRModelLoader", + "LDSRUpscale", + "LDSRUpscaler" + ], + { + "title_aux": "ComfyUI-Flowty-LDSR" + } + ], + "https://github.com/flyingshutter/As_ComfyUI_CustomNodes": [ + [ + "BatchIndex_AS", + "CropImage_AS", + "ImageMixMasked_As", + "ImageToMask_AS", + "Increment_AS", + "Int2Any_AS", + "LatentAdd_AS", + "LatentMixMasked_As", + "LatentMix_AS", + "LatentToImages_AS", + "LoadLatent_AS", + "MapRange_AS", + "MaskToImage_AS", + "Math_AS", + "NoiseImage_AS", + "Number2Float_AS", + "Number2Int_AS", + "Number_AS", + "SaveLatent_AS", + "TextToImage_AS", + "TextWildcardList_AS" + ], + { + "title_aux": "As_ComfyUI_CustomNodes" + } + ], + "https://github.com/foxtrot-roger/comfyui-rf-nodes": [ + [ + "LogBool", + "LogFloat", + "LogInt", + "LogNumber", + "LogString", + "LogVec2", + "LogVec3", + "RF_AtIndexString", + "RF_BoolToString", + "RF_FloatToString", + "RF_IntToString", + "RF_JsonStyleLoader", + "RF_MergeLines", + "RF_NumberToString", + 
"RF_OptionsString", + "RF_RangeFloat", + "RF_RangeInt", + "RF_RangeNumber", + "RF_SavePromptInfo", + "RF_SplitLines", + "RF_TextConcatenate", + "RF_TextInput", + "RF_TextReplace", + "RF_Timestamp", + "RF_ToString", + "RF_Vec2ToString", + "RF_Vec3ToString", + "TextLine" + ], + { + "title_aux": "RF Nodes" + } + ], + "https://github.com/gemell1/ComfyUI_GMIC": [ + [ + "GmicCliWrapper" + ], + { + "title_aux": "ComfyUI_GMIC" + } + ], + "https://github.com/giriss/comfy-image-saver": [ + [ + "Cfg Literal", + "Checkpoint Selector", + "Int Literal", + "Sampler Selector", + "Save Image w/Metadata", + "Scheduler Selector", + "Seed Generator", + "String Literal", + "Width/Height Literal" + ], + { + "title_aux": "Save Image with Generation Metadata" + } + ], + "https://github.com/glibsonoran/Plush-for-ComfyUI": [ + [ + "DalleImage", + "Enhancer", + "ImgTextSwitch", + "Plush-Exif Wrangler", + "mulTextSwitch" + ], + { + "title_aux": "Plush-for-ComfyUI" + } + ], + "https://github.com/glifxyz/ComfyUI-GlifNodes": [ + [ + "GlifConsistencyDecoder", + "GlifPatchConsistencyDecoderTiled", + "SDXLAspectRatio" + ], + { + "title_aux": "ComfyUI-GlifNodes" + } + ], + "https://github.com/glowcone/comfyui-base64-to-image": [ + [ + "LoadImageFromBase64" + ], + { + "title_aux": "Load Image From Base64 URI" + } + ], + "https://github.com/godspede/ComfyUI_Substring": [ + [ + "SubstringTheory" + ], + { + "title_aux": "ComfyUI Substring" + } + ], + "https://github.com/gokayfem/ComfyUI_VLM_nodes": [ + [ + "Joytag", + "JsonToText", + "KeywordExtraction", + "LLMLoader", + "LLMPromptGenerator", + "LLMSampler", + "LLava Loader Simple", + "LLavaPromptGenerator", + "LLavaSamplerAdvanced", + "LLavaSamplerSimple", + "LlavaClipLoader", + "MoonDream", + "PromptGenerateAPI", + "SimpleText", + "Suggester", + "ViewText" + ], + { + "title_aux": "VLM_nodes" + } + ], + "https://github.com/guoyk93/yk-node-suite-comfyui": [ + [ + "YKImagePadForOutpaint", + "YKMaskToImage" + ], + { + "title_aux": "y.k.'s ComfyUI node suite" + } + ], + "https://github.com/hhhzzyang/Comfyui_Lama": [ + [ + "LamaApply", + "LamaModelLoader", + "YamlConfigLoader" + ], + { + "title_aux": "Comfyui-Lama" + } + ], + "https://github.com/hinablue/ComfyUI_3dPoseEditor": [ + [ + "Hina.PoseEditor3D" + ], + { + "title_aux": "ComfyUI 3D Pose Editor" + } + ], + "https://github.com/hustille/ComfyUI_Fooocus_KSampler": [ + [ + "KSampler With Refiner (Fooocus)" + ], + { + "title_aux": "ComfyUI_Fooocus_KSampler" + } + ], + "https://github.com/hustille/ComfyUI_hus_utils": [ + [ + "3way Prompt Styler", + "Batch State", + "Date Time Format", + "Debug Extra", + "Fetch widget value", + "Text Hash" + ], + { + "title_aux": "hus' utils for ComfyUI" + } + ], + "https://github.com/hylarucoder/ComfyUI-Eagle-PNGInfo": [ + [ + "EagleImageNode", + "SDXLPromptStyler", + "SDXLPromptStylerAdvanced", + "SDXLResolutionPresets" + ], + { + "title_aux": "Eagle PNGInfo" + } + ], + "https://github.com/idrirap/ComfyUI-Lora-Auto-Trigger-Words": [ + [ + "FusionText", + "LoraListNames", + "LoraLoaderAdvanced", + "LoraLoaderStackedAdvanced", + "LoraLoaderStackedVanilla", + "LoraLoaderVanilla", + "LoraTagsOnly", + "Randomizer", + "TagsFormater", + "TagsSelector", + "TextInputBasic" + ], + { + "title_aux": "ComfyUI-Lora-Auto-Trigger-Words" + } + ], + "https://github.com/imb101/ComfyUI-FaceSwap": [ + [ + "FaceSwapNode" + ], + { + "title_aux": "FaceSwap" + } + ], + "https://github.com/jags111/ComfyUI_Jags_Audiotools": [ + [ + "BatchJoinAudio", + "BatchToList", + "BitCrushAudioFX", + "BulkVariation", + 
"ChorusAudioFX", + "ClippingAudioFX", + "CompressorAudioFX", + "ConcatAudioList", + "ConvolutionAudioFX", + "CutAudio", + "DelayAudioFX", + "DistortionAudioFX", + "DuplicateAudio", + "GainAudioFX", + "GenerateAudioSample", + "GenerateAudioWave", + "GetAudioFromFolderIndex", + "GetSingle", + "GetStringByIndex", + "HighShelfFilter", + "HighpassFilter", + "ImageToSpectral", + "InvertAudioFX", + "JoinAudio", + "LadderFilter", + "LimiterAudioFX", + "ListToBatch", + "LoadAudioDir", + "LoadAudioFile", + "LoadAudioModel (DD)", + "LoadVST3", + "LowShelfFilter", + "LowpassFilter", + "MP3CompressorAudioFX", + "MixAudioTensors", + "NoiseGateAudioFX", + "OTTAudioFX", + "PeakFilter", + "PhaserEffectAudioFX", + "PitchShiftAudioFX", + "PlotSpectrogram", + "PreviewAudioFile", + "PreviewAudioTensor", + "ResampleAudio", + "ReverbAudioFX", + "ReverseAudio", + "SaveAudioTensor", + "SequenceVariation", + "SliceAudio", + "SoundPlayer", + "StretchAudio", + "samplerate" + ], + { + "author": "jags111", + "description": "This extension offers various audio generation tools", + "nickname": "Audiotools", + "title": "Jags_Audiotools", + "title_aux": "ComfyUI_Jags_Audiotools" + } + ], + "https://github.com/jags111/ComfyUI_Jags_VectorMagic": [ + [ + "CircularVAEDecode", + "JagsCLIPSeg", + "JagsClipseg", + "JagsCombineMasks", + "SVG", + "YoloSEGdetectionNode", + "YoloSegNode", + "color_drop", + "my unique name", + "xy_Tiling_KSampler" + ], + { + "author": "jags111", + "description": "This extension offers various vector manipulation and generation tools", + "nickname": "Jags_VectorMagic", + "title": "Jags_VectorMagic", + "title_aux": "ComfyUI_Jags_VectorMagic" + } + ], + "https://github.com/jags111/efficiency-nodes-comfyui": [ + [ + "AnimateDiff Script", + "Apply ControlNet Stack", + "Control Net Stacker", + "Eff. Loader SDXL", + "Efficient Loader", + "HighRes-Fix Script", + "Image Overlay", + "Join XY Inputs of Same Type", + "KSampler (Efficient)", + "KSampler Adv. 
(Efficient)", + "KSampler SDXL (Eff.)", + "LatentUpscaler", + "LoRA Stack to String converter", + "LoRA Stacker", + "Manual XY Entry Info", + "NNLatentUpscale", + "Noise Control Script", + "Pack SDXL Tuple", + "Tiled Upscaler Script", + "Unpack SDXL Tuple", + "XY Input: Add/Return Noise", + "XY Input: Aesthetic Score", + "XY Input: CFG Scale", + "XY Input: Checkpoint", + "XY Input: Clip Skip", + "XY Input: Control Net", + "XY Input: Control Net Plot", + "XY Input: Denoise", + "XY Input: LoRA", + "XY Input: LoRA Plot", + "XY Input: LoRA Stacks", + "XY Input: Manual XY Entry", + "XY Input: Prompt S/R", + "XY Input: Refiner On/Off", + "XY Input: Sampler/Scheduler", + "XY Input: Seeds++ Batch", + "XY Input: Steps", + "XY Input: VAE", + "XY Plot" + ], + { + "title_aux": "Efficiency Nodes for ComfyUI Version 2.0+" + } + ], + "https://github.com/jamal-alkharrat/ComfyUI_rotate_image": [ + [ + "RotateImage" + ], + { + "title_aux": "ComfyUI_rotate_image" + } + ], + "https://github.com/jamesWalker55/comfyui-various": [ + [], + { + "nodename_pattern": "^JW", + "title_aux": "Various ComfyUI Nodes by Type" + } + ], + "https://github.com/jesenzhang/ComfyUI_StreamDiffusion": [ + [ + "StreamDiffusion_Loader", + "StreamDiffusion_Sampler" + ], + { + "title_aux": "ComfyUI_StreamDiffusion" + } + ], + "https://github.com/jitcoder/lora-info": [ + [ + "ImageFromURL", + "LoraInfo" + ], + { + "title_aux": "LoraInfo" + } + ], + "https://github.com/jjkramhoeft/ComfyUI-Jjk-Nodes": [ + [ + "JjkConcat", + "JjkShowText", + "JjkText", + "SDXLRecommendedImageSize" + ], + { + "title_aux": "ComfyUI-Jjk-Nodes" + } + ], + "https://github.com/jojkaart/ComfyUI-sampler-lcm-alternative": [ + [ + "LCMScheduler", + "SamplerLCMAlternative", + "SamplerLCMCycle" + ], + { + "title_aux": "ComfyUI-sampler-lcm-alternative" + } + ], + "https://github.com/jordoh/ComfyUI-Deepface": [ + [ + "DeepfaceExtractFaces", + "DeepfaceVerify" + ], + { + "title_aux": "ComfyUI Deepface" + } + ], + "https://github.com/jtrue/ComfyUI-JaRue": [ + [ + "Text2Image_jru", + "YouTube2Prompt_jru" + ], + { + "nodename_pattern": "_jru$", + "title_aux": "ComfyUI-JaRue" + } + ], + "https://github.com/ka-puna/comfyui-yanc": [ + [ + "YANC.ConcatStrings", + "YANC.FormatDatetimeString", + "YANC.GetWidgetValueString", + "YANC.IntegerCaster", + "YANC.MultilineString", + "YANC.TruncateString" + ], + { + "title_aux": "comfyui-yanc" + } + ], + "https://github.com/kadirnar/ComfyUI-Transformers": [ + [ + "DepthEstimationPipeline", + "ImageClassificationPipeline", + "ImageSegmentationPipeline", + "ObjectDetectionPipeline" + ], + { + "title_aux": "ComfyUI-Transformers" + } + ], + "https://github.com/kenjiqq/qq-nodes-comfyui": [ + [ + "Any List", + "Axis Pack", + "Axis Unpack", + "Image Accumulator End", + "Image Accumulator Start", + "Load Lines From Text File", + "Slice List", + "Text Splitter", + "XY Grid Helper" + ], + { + "title_aux": "qq-nodes-comfyui" + } + ], + "https://github.com/kft334/Knodes": [ + [ + "Image(s) To Websocket (Base64)", + "ImageOutput", + "Load Image (Base64)", + "Load Images (Base64)" + ], + { + "title_aux": "Knodes" + } + ], + "https://github.com/kijai/ComfyUI-CCSR": [ + [ + "CCSR_Model_Select", + "CCSR_Upscale" + ], + { + "title_aux": "ComfyUI-CCSR" + } + ], + "https://github.com/kijai/ComfyUI-DDColor": [ + [ + "DDColor_Colorize" + ], + { + "title_aux": "ComfyUI-DDColor" + } + ], + "https://github.com/kijai/ComfyUI-KJNodes": [ + [ + "AddLabel", + "BatchCLIPSeg", + "BatchCropFromMask", + "BatchCropFromMaskAdvanced", + "BatchUncrop", + 
"BatchUncropAdvanced", + "BboxToInt", + "ColorMatch", + "ColorToMask", + "CondPassThrough", + "ConditioningMultiCombine", + "ConditioningSetMaskAndCombine", + "ConditioningSetMaskAndCombine3", + "ConditioningSetMaskAndCombine4", + "ConditioningSetMaskAndCombine5", + "CreateAudioMask", + "CreateFadeMask", + "CreateFadeMaskAdvanced", + "CreateFluidMask", + "CreateGradientMask", + "CreateMagicMask", + "CreateShapeMask", + "CreateTextMask", + "CreateVoronoiMask", + "CrossFadeImages", + "DummyLatentOut", + "EffnetEncode", + "EmptyLatentImagePresets", + "FilterZeroMasksAndCorrespondingImages", + "FlipSigmasAdjusted", + "FloatConstant", + "GLIGENTextBoxApplyBatch", + "GenerateNoise", + "GetImageRangeFromBatch", + "GetImagesFromBatchIndexed", + "GetLatentsFromBatchIndexed", + "GrowMaskWithBlur", + "INTConstant", + "ImageBatchRepeatInterleaving", + "ImageBatchTestPattern", + "ImageConcanate", + "ImageGrabPIL", + "ImageGridComposite2x2", + "ImageGridComposite3x3", + "ImageTransformByNormalizedAmplitude", + "ImageUpscaleWithModelBatched", + "InjectNoiseToLatent", + "InsertImageBatchByIndexes", + "NormalizeLatent", + "NormalizedAmplitudeToMask", + "OffsetMask", + "OffsetMaskByNormalizedAmplitude", + "ReferenceOnlySimple3", + "ReplaceImagesInBatch", + "ResizeMask", + "ReverseImageBatch", + "RoundMask", + "SaveImageWithAlpha", + "ScaleBatchPromptSchedule", + "SomethingToString", + "SoundReactive", + "SplitBboxes", + "StableZero123_BatchSchedule", + "StringConstant", + "VRAM_Debug", + "WidgetToString" + ], + { + "title_aux": "KJNodes for ComfyUI" + } + ], + "https://github.com/kijai/ComfyUI-Marigold": [ + [ + "ColorizeDepthmap", + "MarigoldDepthEstimation", + "RemapDepth", + "SaveImageOpenEXR" + ], + { + "title_aux": "Marigold depth estimation in ComfyUI" + } + ], + "https://github.com/kijai/ComfyUI-SVD": [ + [ + "SVDimg2vid" + ], + { + "title_aux": "ComfyUI-SVD" + } + ], + "https://github.com/kinfolk0117/ComfyUI_GradientDeepShrink": [ + [ + "GradientPatchModelAddDownscale", + "GradientPatchModelAddDownscaleAdvanced" + ], + { + "title_aux": "ComfyUI_GradientDeepShrink" + } + ], + "https://github.com/kinfolk0117/ComfyUI_Pilgram": [ + [ + "Pilgram" + ], + { + "title_aux": "ComfyUI_Pilgram" + } + ], + "https://github.com/kinfolk0117/ComfyUI_SimpleTiles": [ + [ + "DynamicTileMerge", + "DynamicTileSplit", + "TileCalc", + "TileMerge", + "TileSplit" + ], + { + "title_aux": "SimpleTiles" + } + ], + "https://github.com/kinfolk0117/ComfyUI_TiledIPAdapter": [ + [ + "TiledIPAdapter" + ], + { + "title_aux": "TiledIPAdapter" + } + ], + "https://github.com/knuknX/ComfyUI-Image-Tools": [ + [ + "BatchImagePathLoader", + "ImageBgRemoveProcessor", + "ImageCheveretoUploader", + "ImageStandardResizeProcessor", + "JSONMessageNotifyTool", + "PreviewJSONNode", + "SingleImagePathLoader", + "SingleImageUrlLoader" + ], + { + "title_aux": "ComfyUI-Image-Tools" + } + ], + "https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI": [ + [ + "LLLiteLoader" + ], + { + "title_aux": "ControlNet-LLLite-ComfyUI" + } + ], + "https://github.com/komojini/ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes": [ + [ + "S3 Bucket LoRA", + "S3Bucket_Load_LoRA", + "XL DreamBooth LoRA", + "XLDB_LoRA" + ], + { + "title_aux": "ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes" + } + ], + "https://github.com/komojini/komojini-comfyui-nodes": [ + [ + "BatchCreativeInterpolationNodeDynamicSettings", + "CachedGetter", + "DragNUWAImageCanvas", + "FlowBuilder", + "FlowBuilder (adv)", + "FlowBuilder (advanced)", + "FlowBuilder (advanced) Setter", + "FlowBuilderSetter", + 
"FlowBuilderSetter (adv)", + "Getter", + "ImageCropByRatio", + "ImageCropByRatioAndResize", + "ImageGetter", + "ImageMerger", + "ImagesCropByRatioAndResizeBatch", + "KSamplerAdvancedCacheable", + "KSamplerCacheable", + "Setter", + "UltimateVideoLoader", + "UltimateVideoLoader (simple)", + "YouTubeVideoLoader" + ], + { + "title_aux": "komojini-comfyui-nodes" + } + ], + "https://github.com/kwaroran/abg-comfyui": [ + [ + "Remove Image Background (abg)" + ], + { + "title_aux": "abg-comfyui" + } + ], + "https://github.com/laksjdjf/LCMSampler-ComfyUI": [ + [ + "SamplerLCM", + "TAESDLoader" + ], + { + "title_aux": "LCMSampler-ComfyUI" + } + ], + "https://github.com/laksjdjf/LoRA-Merger-ComfyUI": [ + [ + "LoraLoaderFromWeight", + "LoraLoaderWeightOnly", + "LoraMerge", + "LoraSave" + ], + { + "title_aux": "LoRA-Merger-ComfyUI" + } + ], + "https://github.com/laksjdjf/attention-couple-ComfyUI": [ + [ + "Attention couple" + ], + { + "title_aux": "attention-couple-ComfyUI" + } + ], + "https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI": [ + [ + "CDTuner", + "Negapip", + "Negpip" + ], + { + "title_aux": "cd-tuner_negpip-ComfyUI" + } + ], + "https://github.com/laksjdjf/pfg-ComfyUI": [ + [ + "PFG" + ], + { + "title_aux": "pfg-ComfyUI" + } + ], + "https://github.com/lilly1987/ComfyUI_node_Lilly": [ + [ + "CheckpointLoaderSimpleText", + "LoraLoaderText", + "LoraLoaderTextRandom", + "Random_Sampler", + "VAELoaderDecode" + ], + { + "title_aux": "simple wildcard for ComfyUI" + } + ], + "https://github.com/lldacing/comfyui-easyapi-nodes": [ + [ + "Base64ToImage", + "Base64ToMask", + "ImageToBase64", + "ImageToBase64Advanced", + "LoadImageFromURL", + "LoadImageToBase64", + "LoadMaskFromURL", + "MaskImageToBase64", + "MaskToBase64", + "MaskToBase64Image", + "SamAutoMaskSEGS" + ], + { + "title_aux": "comfyui-easyapi-nodes" + } + ], + "https://github.com/longgui0318/comfyui-mask-util": [ + [ + "Mask Region Info", + "Mask Selection Of Masks", + "Split Masks" + ], + { + "title_aux": "comfyui-mask-util" + } + ], + "https://github.com/lordgasmic/ComfyUI-Wildcards/raw/master/wildcards.py": [ + [ + "CLIPTextEncodeWithWildcards" + ], + { + "title_aux": "Wildcards" + } + ], + "https://github.com/lrzjason/ComfyUIJasonNode/raw/main/SDXLMixSampler.py": [ + [ + "SDXLMixSampler" + ], + { + "title_aux": "ComfyUIJasonNode" + } + ], + "https://github.com/ltdrdata/ComfyUI-Impact-Pack": [ + [ + "AddMask", + "BasicPipeToDetailerPipe", + "BasicPipeToDetailerPipeSDXL", + "BboxDetectorCombined", + "BboxDetectorCombined_v2", + "BboxDetectorForEach", + "BboxDetectorSEGS", + "BitwiseAndMask", + "BitwiseAndMaskForEach", + "CLIPSegDetectorProvider", + "CfgScheduleHookProvider", + "CombineRegionalPrompts", + "CoreMLDetailerHookProvider", + "DenoiseScheduleHookProvider", + "DenoiseSchedulerDetailerHookProvider", + "DetailerForEach", + "DetailerForEachDebug", + "DetailerForEachDebugPipe", + "DetailerForEachPipe", + "DetailerForEachPipeForAnimateDiff", + "DetailerHookCombine", + "DetailerPipeToBasicPipe", + "EditBasicPipe", + "EditDetailerPipe", + "EditDetailerPipeSDXL", + "EmptySegs", + "FaceDetailer", + "FaceDetailerPipe", + "FromBasicPipe", + "FromBasicPipe_v2", + "FromDetailerPipe", + "FromDetailerPipeSDXL", + "FromDetailerPipe_v2", + "ImageListToImageBatch", + "ImageMaskSwitch", + "ImageReceiver", + "ImageSender", + "ImpactAssembleSEGS", + "ImpactCombineConditionings", + "ImpactCompare", + "ImpactConcatConditionings", + "ImpactConditionalBranch", + "ImpactConditionalBranchSelMode", + "ImpactConditionalStopIteration", + 
"ImpactControlBridge", + "ImpactControlNetApplyAdvancedSEGS", + "ImpactControlNetApplySEGS", + "ImpactControlNetClearSEGS", + "ImpactConvertDataType", + "ImpactDecomposeSEGS", + "ImpactDilateMask", + "ImpactDilateMaskInSEGS", + "ImpactDilate_Mask_SEG_ELT", + "ImpactDummyInput", + "ImpactEdit_SEG_ELT", + "ImpactFloat", + "ImpactFrom_SEG_ELT", + "ImpactGaussianBlurMask", + "ImpactGaussianBlurMaskInSEGS", + "ImpactHFTransformersClassifierProvider", + "ImpactIfNone", + "ImpactImageBatchToImageList", + "ImpactImageInfo", + "ImpactInt", + "ImpactInversedSwitch", + "ImpactIsNotEmptySEGS", + "ImpactKSamplerAdvancedBasicPipe", + "ImpactKSamplerBasicPipe", + "ImpactLatentInfo", + "ImpactLogger", + "ImpactLogicalOperators", + "ImpactMakeImageBatch", + "ImpactMakeImageList", + "ImpactMakeTileSEGS", + "ImpactMinMax", + "ImpactNeg", + "ImpactNodeSetMuteState", + "ImpactQueueTrigger", + "ImpactQueueTriggerCountdown", + "ImpactRemoteBoolean", + "ImpactRemoteInt", + "ImpactSEGSClassify", + "ImpactSEGSConcat", + "ImpactSEGSLabelFilter", + "ImpactSEGSOrderedFilter", + "ImpactSEGSPicker", + "ImpactSEGSRangeFilter", + "ImpactSEGSToMaskBatch", + "ImpactSEGSToMaskList", + "ImpactScaleBy_BBOX_SEG_ELT", + "ImpactSegsAndMask", + "ImpactSegsAndMaskForEach", + "ImpactSetWidgetValue", + "ImpactSimpleDetectorSEGS", + "ImpactSimpleDetectorSEGSPipe", + "ImpactSimpleDetectorSEGS_for_AD", + "ImpactSleep", + "ImpactStringSelector", + "ImpactSwitch", + "ImpactValueReceiver", + "ImpactValueSender", + "ImpactWildcardEncode", + "ImpactWildcardProcessor", + "IterativeImageUpscale", + "IterativeLatentUpscale", + "KSamplerAdvancedProvider", + "KSamplerProvider", + "LatentPixelScale", + "LatentReceiver", + "LatentSender", + "LatentSwitch", + "MMDetDetectorProvider", + "MMDetLoader", + "MaskDetailerPipe", + "MaskListToMaskBatch", + "MaskPainter", + "MaskToSEGS", + "MaskToSEGS_for_AnimateDiff", + "MasksToMaskList", + "MediaPipeFaceMeshToSEGS", + "NoiseInjectionDetailerHookProvider", + "NoiseInjectionHookProvider", + "ONNXDetectorProvider", + "ONNXDetectorSEGS", + "PixelKSampleHookCombine", + "PixelKSampleUpscalerProvider", + "PixelKSampleUpscalerProviderPipe", + "PixelTiledKSampleUpscalerProvider", + "PixelTiledKSampleUpscalerProviderPipe", + "PreviewBridge", + "PreviewBridgeLatent", + "PreviewDetailerHookProvider", + "ReencodeLatent", + "ReencodeLatentPipe", + "RegionalPrompt", + "RegionalSampler", + "RegionalSamplerAdvanced", + "RemoveImageFromSEGS", + "RemoveNoiseMask", + "SAMDetectorCombined", + "SAMDetectorSegmented", + "SAMLoader", + "SEGSDetailer", + "SEGSDetailerForAnimateDiff", + "SEGSLabelFilterDetailerHookProvider", + "SEGSOrderedFilterDetailerHookProvider", + "SEGSPaste", + "SEGSPreview", + "SEGSPreviewCNet", + "SEGSRangeFilterDetailerHookProvider", + "SEGSSwitch", + "SEGSToImageList", + "SegmDetectorCombined", + "SegmDetectorCombined_v2", + "SegmDetectorForEach", + "SegmDetectorSEGS", + "Segs Mask", + "Segs Mask ForEach", + "SegsMaskCombine", + "SegsToCombinedMask", + "SetDefaultImageForSEGS", + "StepsScheduleHookProvider", + "SubtractMask", + "SubtractMaskForEach", + "TiledKSamplerProvider", + "ToBasicPipe", + "ToBinaryMask", + "ToDetailerPipe", + "ToDetailerPipeSDXL", + "TwoAdvancedSamplersForMask", + "TwoSamplersForMask", + "TwoSamplersForMaskUpscalerProvider", + "TwoSamplersForMaskUpscalerProviderPipe", + "UltralyticsDetectorProvider", + "UnsamplerDetailerHookProvider", + "UnsamplerHookProvider" + ], + { + "author": "Dr.Lt.Data", + "description": "This extension offers various detector nodes and detailer nodes that 
allow you to configure a workflow that automatically enhances facial details. It also provides an iterative upscaler.", + "nickname": "Impact Pack", + "title": "Impact Pack", + "title_aux": "ComfyUI Impact Pack" + } + ], + "https://github.com/ltdrdata/ComfyUI-Inspire-Pack": [ + [ + "AnimeLineArt_Preprocessor_Provider_for_SEGS //Inspire", + "ApplyRegionalIPAdapters //Inspire", + "BindImageListPromptList //Inspire", + "CLIPTextEncodeWithWeight //Inspire", + "CacheBackendData //Inspire", + "CacheBackendDataList //Inspire", + "CacheBackendDataNumberKey //Inspire", + "CacheBackendDataNumberKeyList //Inspire", + "Canny_Preprocessor_Provider_for_SEGS //Inspire", + "ChangeImageBatchSize //Inspire", + "CheckpointLoaderSimpleShared //Inspire", + "Color_Preprocessor_Provider_for_SEGS //Inspire", + "ConcatConditioningsWithMultiplier //Inspire", + "DWPreprocessor_Provider_for_SEGS //Inspire", + "FakeScribblePreprocessor_Provider_for_SEGS //Inspire", + "FloatRange //Inspire", + "FromIPAdapterPipe //Inspire", + "GlobalSampler //Inspire", + "GlobalSeed //Inspire", + "HEDPreprocessor_Provider_for_SEGS //Inspire", + "HyperTile //Inspire", + "IPAdapterModelHelper //Inspire", + "ImageBatchSplitter //Inspire", + "InpaintPreprocessor_Provider_for_SEGS //Inspire", + "KSampler //Inspire", + "KSamplerAdvanced //Inspire", + "KSamplerAdvancedPipe //Inspire", + "KSamplerAdvancedProgress //Inspire", + "KSamplerPipe //Inspire", + "KSamplerProgress //Inspire", + "LatentBatchSplitter //Inspire", + "LeRes_DepthMap_Preprocessor_Provider_for_SEGS //Inspire", + "LineArt_Preprocessor_Provider_for_SEGS //Inspire", + "ListCounter //Inspire", + "LoadImage //Inspire", + "LoadImageListFromDir //Inspire", + "LoadImagesFromDir //Inspire", + "LoadPromptsFromDir //Inspire", + "LoadPromptsFromFile //Inspire", + "LoadSinglePromptFromFile //Inspire", + "LoraBlockInfo //Inspire", + "LoraLoaderBlockWeight //Inspire", + "MakeBasicPipe //Inspire", + "Manga2Anime_LineArt_Preprocessor_Provider_for_SEGS //Inspire", + "MediaPipeFaceMeshDetectorProvider //Inspire", + "MediaPipe_FaceMesh_Preprocessor_Provider_for_SEGS //Inspire", + "MeshGraphormerDepthMapPreprocessorProvider_for_SEGS //Inspire", + "MiDaS_DepthMap_Preprocessor_Provider_for_SEGS //Inspire", + "OpenPose_Preprocessor_Provider_for_SEGS //Inspire", + "PromptBuilder //Inspire", + "PromptExtractor //Inspire", + "RandomGeneratorForList //Inspire", + "RegionalConditioningColorMask //Inspire", + "RegionalConditioningSimple //Inspire", + "RegionalIPAdapterColorMask //Inspire", + "RegionalIPAdapterEncodedColorMask //Inspire", + "RegionalIPAdapterEncodedMask //Inspire", + "RegionalIPAdapterMask //Inspire", + "RegionalPromptColorMask //Inspire", + "RegionalPromptSimple //Inspire", + "RegionalSeedExplorerColorMask //Inspire", + "RegionalSeedExplorerMask //Inspire", + "RemoveBackendData //Inspire", + "RemoveBackendDataNumberKey //Inspire", + "RemoveControlNet //Inspire", + "RemoveControlNetFromRegionalPrompts //Inspire", + "RetrieveBackendData //Inspire", + "RetrieveBackendDataNumberKey //Inspire", + "SeedExplorer //Inspire", + "ShowCachedInfo //Inspire", + "TilePreprocessor_Provider_for_SEGS //Inspire", + "ToIPAdapterPipe //Inspire", + "UnzipPrompt //Inspire", + "WildcardEncode //Inspire", + "XY Input: Lora Block Weight //Inspire", + "ZipPrompt //Inspire", + "Zoe_DepthMap_Preprocessor_Provider_for_SEGS //Inspire" + ], + { + "author": "Dr.Lt.Data", + "description": "This extension provides various nodes to support Lora Block Weight and the Impact Pack.", + "nickname": "Inspire Pack", + "nodename_pattern":
"Inspire$", + "title": "Inspire Pack", + "title_aux": "ComfyUI Inspire Pack" + } + ], + "https://github.com/m-sokes/ComfyUI-Sokes-Nodes": [ + [ + "Custom Date Format | sokes \ud83e\uddac", + "Latent Switch x9 | sokes \ud83e\uddac" + ], + { + "title_aux": "ComfyUI Sokes Nodes" + } + ], + "https://github.com/m957ymj75urz/ComfyUI-Custom-Nodes/raw/main/clip-text-encode-split/clip_text_encode_split.py": [ + [ + "RawText", + "RawTextCombine", + "RawTextEncode", + "RawTextReplace" + ], + { + "title_aux": "m957ymj75urz/ComfyUI-Custom-Nodes" + } + ], + "https://github.com/mape/ComfyUI-mape-Helpers": [ + [ + "mape Variable" + ], + { + "author": "mape", + "description": "Various QoL improvements like prompt tweaking, variable assignment, image preview, fuzzy search, error reporting, organizing and node navigation.", + "nickname": "\ud83d\udfe1 mape's helpers", + "title": "mape's helpers", + "title_aux": "mape's ComfyUI Helpers" + } + ], + "https://github.com/marhensa/sdxl-recommended-res-calc": [ + [ + "RecommendedResCalc" + ], + { + "title_aux": "Recommended Resolution Calculator" + } + ], + "https://github.com/martijnat/comfyui-previewlatent": [ + [ + "PreviewLatent", + "PreviewLatentAdvanced", + "PreviewLatentXL" + ], + { + "title_aux": "comfyui-previewlatent" + } + ], + "https://github.com/massao000/ComfyUI_aspect_ratios": [ + [ + "Aspect Ratios Node" + ], + { + "title_aux": "ComfyUI_aspect_ratios" + } + ], + "https://github.com/matan1905/ComfyUI-Serving-Toolkit": [ + [ + "DiscordServing", + "ServingInputNumber", + "ServingInputText", + "ServingOutput", + "WebSocketServing" + ], + { + "title_aux": "ComfyUI Serving toolkit" + } + ], + "https://github.com/mav-rik/facerestore_cf": [ + [ + "CropFace", + "FaceRestoreCFWithModel", + "FaceRestoreModelLoader" + ], + { + "title_aux": "Facerestore CF (Code Former)" + } + ], + "https://github.com/mbrostami/ComfyUI-HF": [ + [ + "GPT2Node" + ], + { + "title_aux": "ComfyUI-HF" + } + ], + "https://github.com/mcmonkeyprojects/sd-dynamic-thresholding": [ + [ + "DynamicThresholdingFull", + "DynamicThresholdingSimple" + ], + { + "title_aux": "Stable Diffusion Dynamic Thresholding (CFG Scale Fix)" + } + ], + "https://github.com/meap158/ComfyUI-Background-Replacement": [ + [ + "BackgroundReplacement", + "ImageComposite" + ], + { + "title_aux": "ComfyUI-Background-Replacement" + } + ], + "https://github.com/meap158/ComfyUI-GPU-temperature-protection": [ + [ + "GPUTemperatureProtection" + ], + { + "title_aux": "GPU temperature protection" + } + ], + "https://github.com/meap158/ComfyUI-Prompt-Expansion": [ + [ + "PromptExpansion" + ], + { + "title_aux": "ComfyUI-Prompt-Expansion" + } + ], + "https://github.com/melMass/comfy_mtb": [ + [ + "Animation Builder (mtb)", + "Any To String (mtb)", + "Batch Float (mtb)", + "Batch Float Assemble (mtb)", + "Batch Float Fill (mtb)", + "Batch Make (mtb)", + "Batch Merge (mtb)", + "Batch Shake (mtb)", + "Batch Shape (mtb)", + "Batch Transform (mtb)", + "Bbox (mtb)", + "Bbox From Mask (mtb)", + "Blur (mtb)", + "Color Correct (mtb)", + "Colored Image (mtb)", + "Concat Images (mtb)", + "Crop (mtb)", + "Debug (mtb)", + "Deep Bump (mtb)", + "Export With Ffmpeg (mtb)", + "Face Swap (mtb)", + "Film Interpolation (mtb)", + "Fit Number (mtb)", + "Float To Number (mtb)", + "Get Batch From History (mtb)", + "Image Compare (mtb)", + "Image Premultiply (mtb)", + "Image Remove Background Rembg (mtb)", + "Image Resize Factor (mtb)", + "Image Tile Offset (mtb)", + "Int To Bool (mtb)", + "Int To Number (mtb)", + "Interpolate Clip Sequential (mtb)", + 
"Latent Lerp (mtb)", + "Load Face Analysis Model (mtb)", + "Load Face Enhance Model (mtb)", + "Load Face Swap Model (mtb)", + "Load Film Model (mtb)", + "Load Image From Url (mtb)", + "Load Image Sequence (mtb)", + "Mask To Image (mtb)", + "Math Expression (mtb)", + "Model Patch Seamless (mtb)", + "Pick From Batch (mtb)", + "Qr Code (mtb)", + "Restore Face (mtb)", + "Save Gif (mtb)", + "Save Image Grid (mtb)", + "Save Image Sequence (mtb)", + "Save Tensors (mtb)", + "Sharpen (mtb)", + "Smart Step (mtb)", + "Stack Images (mtb)", + "String Replace (mtb)", + "Styles Loader (mtb)", + "Text To Image (mtb)", + "Transform Image (mtb)", + "Uncrop (mtb)", + "Unsplash Image (mtb)", + "Vae Decode (mtb)" + ], + { + "nodename_pattern": "\\(mtb\\)$", + "title_aux": "MTB Nodes" + } + ], + "https://github.com/mihaiiancu/ComfyUI_Inpaint": [ + [ + "InpaintMediapipe" + ], + { + "title_aux": "mihaiiancu/Inpaint" + } + ], + "https://github.com/mikkel/ComfyUI-text-overlay": [ + [ + "Image Text Overlay" + ], + { + "title_aux": "ComfyUI - Text Overlay Plugin" + } + ], + "https://github.com/mikkel/comfyui-mask-boundingbox": [ + [ + "Mask Bounding Box" + ], + { + "title_aux": "ComfyUI - Mask Bounding Box" + } + ], + "https://github.com/mlinmg/ComfyUI-LaMA-Preprocessor": [ + [ + "LaMaPreprocessor", + "lamaPreprocessor" + ], + { + "title_aux": "LaMa Preprocessor [WIP]" + } + ], + "https://github.com/modusCell/ComfyUI-dimension-node-modusCell": [ + [ + "DimensionProviderFree modusCell", + "DimensionProviderRatio modusCell", + "String Concat modusCell" + ], + { + "title_aux": "Preset Dimensions" + } + ], + "https://github.com/mpiquero7164/ComfyUI-SaveImgPrompt": [ + [ + "Save IMG Prompt" + ], + { + "title_aux": "SaveImgPrompt" + } + ], + "https://github.com/nagolinc/ComfyUI_FastVAEDecorder_SDXL": [ + [ + "FastLatentToImage" + ], + { + "title_aux": "ComfyUI_FastVAEDecorder_SDXL" + } + ], + "https://github.com/natto-maki/ComfyUI-NegiTools": [ + [ + "NegiTools_CompositeImages", + "NegiTools_DepthEstimationByMarigold", + "NegiTools_DetectFaceRotationForInpainting", + "NegiTools_ImageProperties", + "NegiTools_LatentProperties", + "NegiTools_NoiseImageGenerator", + "NegiTools_OpenAiDalle3", + "NegiTools_OpenAiGpt", + "NegiTools_OpenAiGpt4v", + "NegiTools_OpenAiTranslate", + "NegiTools_OpenPoseToPointList", + "NegiTools_PointListToMask", + "NegiTools_RandomImageLoader", + "NegiTools_SaveImageToDirectory", + "NegiTools_SeedGenerator", + "NegiTools_StereoImageGenerator", + "NegiTools_StringFunction" + ], + { + "title_aux": "ComfyUI-NegiTools" + } + ], + "https://github.com/nicolai256/comfyUI_Nodes_nicolai256/raw/main/yugioh-presets.py": [ + [ + "yugioh_Presets" + ], + { + "title_aux": "comfyUI_Nodes_nicolai256" + } + ], + "https://github.com/ningxiaoxiao/comfyui-NDI": [ + [ + "NDI_LoadImage", + "NDI_SendImage" + ], + { + "title_aux": "comfyui-NDI" + } + ], + "https://github.com/nkchocoai/ComfyUI-PromptUtilities": [ + [ + "PromptUtilitiesConstString", + "PromptUtilitiesConstStringMultiLine", + "PromptUtilitiesFormatString", + "PromptUtilitiesJoinStringList", + "PromptUtilitiesLoadPreset", + "PromptUtilitiesLoadPresetAdvanced", + "PromptUtilitiesRandomPreset", + "PromptUtilitiesRandomPresetAdvanced" + ], + { + "title_aux": "ComfyUI-PromptUtilities" + } + ], + "https://github.com/nkchocoai/ComfyUI-SizeFromPresets": [ + [ + "EmptyLatentImageFromPresetsSD15", + "EmptyLatentImageFromPresetsSDXL", + "RandomEmptyLatentImageFromPresetsSD15", + "RandomEmptyLatentImageFromPresetsSDXL", + "RandomSizeFromPresetsSD15", + 
"RandomSizeFromPresetsSDXL", + "SizeFromPresetsSD15", + "SizeFromPresetsSDXL" + ], + { + "title_aux": "ComfyUI-SizeFromPresets" + } + ], + "https://github.com/nkchocoai/ComfyUI-TextOnSegs": [ + [ + "CalcMaxFontSize", + "ExtractDominantColor", + "GetComplementaryColor", + "SegsToRegion", + "TextOnSegsFloodFill" + ], + { + "title_aux": "ComfyUI-TextOnSegs" + } + ], + "https://github.com/noembryo/ComfyUI-noEmbryo": [ + [ + "PromptTermList1", + "PromptTermList2", + "PromptTermList3", + "PromptTermList4", + "PromptTermList5", + "PromptTermList6" + ], + { + "author": "noEmbryo", + "description": "Some useful nodes for ComfyUI", + "nickname": "noEmbryo", + "title": "noEmbryo nodes for ComfyUI", + "title_aux": "noEmbryo nodes" + } + ], + "https://github.com/nosiu/comfyui-instantId-faceswap": [ + [ + "FaceEmbed", + "FaceSwapGenerationInpaint", + "FaceSwapSetupPipeline", + "LCMLora" + ], + { + "title_aux": "ComfyUI InstantID Faceswapper" + } + ], + "https://github.com/noxinias/ComfyUI_NoxinNodes": [ + [ + "NoxinChime", + "NoxinPromptLoad", + "NoxinPromptSave", + "NoxinScaledResolution", + "NoxinSimpleMath", + "NoxinSplitPrompt" + ], + { + "title_aux": "ComfyUI_NoxinNodes" + } + ], + "https://github.com/ntc-ai/ComfyUI-DARE-LoRA-Merge": [ + [ + "Apply LoRA", + "DARE Merge LoRA Stack", + "Save LoRA" + ], + { + "title_aux": "ComfyUI - Apply LoRA Stacker with DARE" + } + ], + "https://github.com/ntdviet/comfyui-ext/raw/main/custom_nodes/gcLatentTunnel/gcLatentTunnel.py": [ + [ + "gcLatentTunnel" + ], + { + "title_aux": "ntdviet/comfyui-ext" + } + ], + "https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92": [ + [ + "CLIPStringEncode _O", + "Chat completion _O", + "ChatGPT Simple _O", + "ChatGPT _O", + "ChatGPT compact _O", + "Chat_Completion _O", + "Chat_Message _O", + "Chat_Message_fromString _O", + "Concat Text _O", + "ConcatRandomNSP_O", + "Debug String _O", + "Debug Text _O", + "Debug Text route _O", + "Edit_image _O", + "Equation1param _O", + "Equation2params _O", + "GetImage_(Width&Height) _O", + "GetLatent_(Width&Height) _O", + "ImageScaleFactor _O", + "ImageScaleFactorSimple _O", + "LatentUpscaleFactor _O", + "LatentUpscaleFactorSimple _O", + "LatentUpscaleMultiply", + "Note _O", + "RandomNSP _O", + "Replace Text _O", + "String _O", + "Text _O", + "Text2Image _O", + "Trim Text _O", + "VAEDecodeParallel _O", + "combine_chat_messages _O", + "compine_chat_messages _O", + "concat Strings _O", + "create image _O", + "create_image _O", + "debug Completeion _O", + "debug messages_O", + "float _O", + "floatToInt _O", + "floatToText _O", + "int _O", + "intToFloat _O", + "load_openAI _O", + "replace String _O", + "replace String advanced _O", + "saveTextToFile _O", + "seed _O", + "selectLatentFromBatch _O", + "string2Image _O", + "trim String _O", + "variation_image _O" + ], + { + "title_aux": "Quality of life Suit:V2" + } + ], + "https://github.com/ostris/ostris_nodes_comfyui": [ + [ + "LLM Pipe Loader - Ostris", + "LLM Prompt Upsampling - Ostris", + "One Seed - Ostris", + "Text Box - Ostris" + ], + { + "nodename_pattern": "- Ostris$", + "title_aux": "Ostris Nodes ComfyUI" + } + ], + "https://github.com/ownimage/ComfyUI-ownimage": [ + [ + "Caching Image Loader" + ], + { + "title_aux": "ComfyUI-ownimage" + } + ], + "https://github.com/oyvindg/ComfyUI-TrollSuite": [ + [ + "BinaryImageMask", + "ImagePadding", + "LoadLastImage", + "RandomMask", + "TransparentImage" + ], + { + "title_aux": "ComfyUI-TrollSuite" + } + ], + "https://github.com/palant/extended-saveimage-comfyui": [ + [ + "SaveImageExtended" + 
], + { + "title_aux": "Extended Save Image for ComfyUI" + } + ], + "https://github.com/palant/image-resize-comfyui": [ + [ + "ImageResize" + ], + { + "title_aux": "Image Resize for ComfyUI" + } + ], + "https://github.com/pants007/comfy-pants": [ + [ + "CLIPTextEncodeAIO", + "Image Make Square" + ], + { + "title_aux": "pants" + } + ], + "https://github.com/paulo-coronado/comfy_clip_blip_node": [ + [ + "CLIPTextEncodeBLIP", + "CLIPTextEncodeBLIP-2", + "Example" + ], + { + "title_aux": "comfy_clip_blip_node" + } + ], + "https://github.com/picturesonpictures/comfy_PoP": [ + [ + "AdaptiveCannyDetector_PoP", + "AnyAspectRatio", + "ConditioningMultiplier_PoP", + "ConditioningNormalizer_PoP", + "DallE3_PoP", + "LoadImageResizer_PoP", + "LoraStackLoader10_PoP", + "LoraStackLoader_PoP", + "VAEDecoderPoP", + "VAEEncoderPoP" + ], + { + "title_aux": "comfy_PoP" + } + ], + "https://github.com/pkpkTech/ComfyUI-SaveAVIF": [ + [ + "SaveAvif" + ], + { + "title_aux": "ComfyUI-SaveAVIF" + } + ], + "https://github.com/pkpkTech/ComfyUI-TemporaryLoader": [ + [ + "LoadTempCheckpoint", + "LoadTempLoRA", + "LoadTempMultiLoRA" + ], + { + "title_aux": "ComfyUI-TemporaryLoader" + } + ], + "https://github.com/pythongosssss/ComfyUI-Custom-Scripts": [ + [ + "CheckpointLoader|pysssss", + "ConstrainImageforVideo|pysssss", + "ConstrainImage|pysssss", + "LoadText|pysssss", + "LoraLoader|pysssss", + "MathExpression|pysssss", + "MultiPrimitive|pysssss", + "PlaySound|pysssss", + "Repeater|pysssss", + "ReroutePrimitive|pysssss", + "SaveText|pysssss", + "ShowText|pysssss", + "StringFunction|pysssss" + ], + { + "title_aux": "pythongosssss/ComfyUI-Custom-Scripts" + } + ], + "https://github.com/pythongosssss/ComfyUI-WD14-Tagger": [ + [ + "WD14Tagger|pysssss" + ], + { + "title_aux": "ComfyUI WD 1.4 Tagger" + } + ], + "https://github.com/ramyma/A8R8_ComfyUI_nodes": [ + [ + "Base64ImageInput", + "Base64ImageOutput" + ], + { + "title_aux": "A8R8 ComfyUI Nodes" + } + ], + "https://github.com/rcfcu2000/zhihuige-nodes-comfyui": [ + [ + "Combine ZHGMasks", + "Cover ZHGMasks", + "From ZHG pip", + "GroundingDinoModelLoader (zhihuige)", + "GroundingDinoPIPESegment (zhihuige)", + "GroundingDinoSAMSegment (zhihuige)", + "InvertMask (zhihuige)", + "SAMModelLoader (zhihuige)", + "To ZHG pip", + "ZHG FaceIndex", + "ZHG GetMaskArea", + "ZHG Image Levels", + "ZHG SaveImage", + "ZHG SmoothEdge", + "ZHG UltimateSDUpscale" + ], + { + "title_aux": "zhihuige-nodes-comfyui" + } + ], + "https://github.com/rcsaquino/comfyui-custom-nodes": [ + [ + "BackgroundRemover | rcsaquino", + "VAELoader | rcsaquino", + "VAEProcessor | rcsaquino" + ], + { + "title_aux": "rcsaquino/comfyui-custom-nodes" + } + ], + "https://github.com/receyuki/comfyui-prompt-reader-node": [ + [ + "SDBatchLoader", + "SDLoraLoader", + "SDLoraSelector", + "SDParameterExtractor", + "SDParameterGenerator", + "SDPromptMerger", + "SDPromptReader", + "SDPromptSaver", + "SDTypeConverter" + ], + { + "author": "receyuki", + "description": "ComfyUI node version of the SD Prompt Reader", + "nickname": "SD Prompt Reader", + "title": "SD Prompt Reader", + "title_aux": "comfyui-prompt-reader-node" + } + ], + "https://github.com/redhottensors/ComfyUI-Prediction": [ + [ + "AvoidErasePrediction", + "CFGPrediction", + "CombinePredictions", + "ConditionedPrediction", + "PerpNegPrediction", + "SamplerCustomPrediction", + "ScalePrediction", + "ScaledGuidancePrediction" + ], + { + "author": "RedHotTensors", + "description": "Fully customizable Classifier-Free Guidance for ComfyUI", + "nickname":
"ComfyUI-Prediction", + "title": "ComfyUI-Prediction", + "title_aux": "ComfyUI-Prediction" + } + ], + "https://github.com/rgthree/rgthree-comfy": [ + [], + { + "author": "rgthree", + "description": "A bunch of nodes I created that I also find useful.", + "nickname": "rgthree", + "nodename_pattern": " \\(rgthree\\)$", + "title": "Comfy Nodes", + "title_aux": "rgthree's ComfyUI Nodes" + } + ], + "https://github.com/richinsley/Comfy-LFO": [ + [ + "LFO_Pulse", + "LFO_Sawtooth", + "LFO_Sine", + "LFO_Square", + "LFO_Triangle" + ], + { + "title_aux": "Comfy-LFO" + } + ], + "https://github.com/ricklove/comfyui-ricklove": [ + [ + "RL_Crop_Resize", + "RL_Crop_Resize_Batch", + "RL_Depth16", + "RL_Finetune_Analyze", + "RL_Finetune_Analyze_Batch", + "RL_Finetune_Variable", + "RL_Image_Shadow", + "RL_Image_Threshold_Channels", + "RL_Internet_Search", + "RL_LoadImageSequence", + "RL_Optical_Flow_Dip", + "RL_SaveImageSequence", + "RL_Uncrop", + "RL_Warp_Image", + "RL_Zoe_Depth_Map_Preprocessor", + "RL_Zoe_Depth_Map_Preprocessor_Raw_Infer", + "RL_Zoe_Depth_Map_Preprocessor_Raw_Process" + ], + { + "title_aux": "comfyui-ricklove" + } + ], + "https://github.com/rklaffehn/rk-comfy-nodes": [ + [ + "RK_CivitAIAddHashes", + "RK_CivitAIMetaChecker" + ], + { + "title_aux": "rk-comfy-nodes" + } + ], + "https://github.com/romeobuilderotti/ComfyUI-PNG-Metadata": [ + [ + "SetMetadataAll", + "SetMetadataString" + ], + { + "title_aux": "ComfyUI PNG Metadata" + } + ], + "https://github.com/rui40000/RUI-Nodes": [ + [ + "ABCondition", + "CharacterCount" + ], + { + "title_aux": "RUI-Nodes" + } + ], + "https://github.com/s1dlx/comfy_meh/raw/main/meh.py": [ + [ + "MergingExecutionHelper" + ], + { + "title_aux": "comfy_meh" + } + ], + "https://github.com/seanlynch/comfyui-optical-flow": [ + [ + "Apply optical flow", + "Compute optical flow", + "Visualize optical flow" + ], + { + "title_aux": "ComfyUI Optical Flow" + } + ], + "https://github.com/seanlynch/srl-nodes": [ + [ + "SRL Conditional Interrrupt", + "SRL Eval", + "SRL Filter Image List", + "SRL Format String" + ], + { + "title_aux": "SRL's nodes" + } + ], + "https://github.com/sergekatzmann/ComfyUI_Nimbus-Pack": [ + [ + "ImageResizeAndCropNode", + "ImageSquareAdapterNode" + ], + { + "title_aux": "ComfyUI_Nimbus-Pack" + } + ], + "https://github.com/shadowcz007/comfyui-consistency-decoder": [ + [ + "VAEDecodeConsistencyDecoder", + "VAELoaderConsistencyDecoder" + ], + { + "title_aux": "Consistency Decoder" + } + ], + "https://github.com/shadowcz007/comfyui-mixlab-nodes": [ + [ + "3DImage", + "AppInfo", + "AreaToMask", + "CenterImage", + "CharacterInText", + "ChatGPTOpenAI", + "CkptNames_", + "Color", + "DynamicDelayProcessor", + "EmbeddingPrompt", + "EnhanceImage", + "FaceToMask", + "FeatheredMask", + "FloatSlider", + "FloatingVideo", + "Font", + "GamePal", + "GetImageSize_", + "GradientImage", + "GridOutput", + "ImageColorTransfer", + "ImageCropByAlpha", + "IntNumber", + "JoinWithDelimiter", + "LaMaInpainting", + "LimitNumber", + "LoadImagesFromPath", + "LoadImagesFromURL", + "LoraNames_", + "MergeLayers", + "MirroredImage", + "MultiplicationNode", + "NewLayer", + "NoiseImage", + "OutlineMask", + "PromptImage", + "PromptSimplification", + "PromptSlide", + "RandomPrompt", + "ResizeImageMixlab", + "SamplerNames_", + "SaveImageToLocal", + "ScreenShare", + "Seed_", + "ShowLayer", + "ShowTextForGPT", + "SmoothMask", + "SpeechRecognition", + "SpeechSynthesis", + "SplitImage", + "SplitLongMask", + "SvgImage", + "SwitchByIndex", + "TESTNODE_", + "TESTNODE_TOKEN", + "TextImage", + 
"TextInput_", + "TextToNumber", + "TransparentImage", + "VAEDecodeConsistencyDecoder", + "VAELoaderConsistencyDecoder" + ], + { + "title_aux": "comfyui-mixlab-nodes" + } + ], + "https://github.com/shadowcz007/comfyui-ultralytics-yolo": [ + [ + "DetectByLabel" + ], + { + "title_aux": "comfyui-ultralytics-yolo" + } + ], + "https://github.com/shiimizu/ComfyUI-PhotoMaker-Plus": [ + [ + "PhotoMakerEncodePlus", + "PhotoMakerStyles", + "PrepImagesForClipVisionFromPath" + ], + { + "title_aux": "ComfyUI PhotoMaker Plus" + } + ], + "https://github.com/shiimizu/ComfyUI-TiledDiffusion": [ + [ + "NoiseInversion", + "TiledDiffusion", + "VAEDecodeTiled_TiledDiffusion", + "VAEEncodeTiled_TiledDiffusion" + ], + { + "title_aux": "Tiled Diffusion & VAE for ComfyUI" + } + ], + "https://github.com/shiimizu/ComfyUI_smZNodes": [ + [ + "smZ CLIPTextEncode", + "smZ Settings" + ], + { + "title_aux": "smZNodes" + } + ], + "https://github.com/shingo1228/ComfyUI-SDXL-EmptyLatentImage": [ + [ + "SDXL Empty Latent Image" + ], + { + "title_aux": "ComfyUI-SDXL-EmptyLatentImage" + } + ], + "https://github.com/shingo1228/ComfyUI-send-eagle-slim": [ + [ + "Send Eagle with text", + "Send Webp Image to Eagle" + ], + { + "title_aux": "ComfyUI-send-Eagle(slim)" + } + ], + "https://github.com/shockz0rz/ComfyUI_InterpolateEverything": [ + [ + "OpenposePreprocessorInterpolate" + ], + { + "title_aux": "InterpolateEverything" + } + ], + "https://github.com/shockz0rz/comfy-easy-grids": [ + [ + "FloatToText", + "GridFloatList", + "GridFloats", + "GridIntList", + "GridInts", + "GridLoras", + "GridStringList", + "GridStrings", + "ImageGridCommander", + "IntToText", + "SaveImageGrid", + "TextConcatenator" + ], + { + "title_aux": "comfy-easy-grids" + } + ], + "https://github.com/siliconflow/onediff_comfy_nodes": [ + [ + "CompareModel", + "ControlNetGraphLoader", + "ControlNetGraphSaver", + "ControlNetSpeedup", + "ModelGraphLoader", + "ModelGraphSaver", + "ModelSpeedup", + "ModuleDeepCacheSpeedup", + "OneDiffCheckpointLoaderSimple", + "SVDSpeedup", + "ShowImageDiff", + "VaeGraphLoader", + "VaeGraphSaver", + "VaeSpeedup" + ], + { + "title_aux": "OneDiff Nodes" + } + ], + "https://github.com/sipherxyz/comfyui-art-venture": [ + [ + "AV_CheckpointMerge", + "AV_CheckpointModelsToParametersPipe", + "AV_CheckpointSave", + "AV_ControlNetEfficientLoader", + "AV_ControlNetEfficientLoaderAdvanced", + "AV_ControlNetEfficientStacker", + "AV_ControlNetEfficientStackerSimple", + "AV_ControlNetLoader", + "AV_ControlNetPreprocessor", + "AV_LoraListLoader", + "AV_LoraListStacker", + "AV_LoraLoader", + "AV_ParametersPipeToCheckpointModels", + "AV_ParametersPipeToPrompts", + "AV_PromptsToParametersPipe", + "AV_SAMLoader", + "AV_VAELoader", + "AspectRatioSelector", + "BLIPCaption", + "BLIPLoader", + "BooleanPrimitive", + "ColorBlend", + "ColorCorrect", + "DeepDanbooruCaption", + "DependenciesEdit", + "Fooocus_KSampler", + "Fooocus_KSamplerAdvanced", + "GetBoolFromJson", + "GetFloatFromJson", + "GetIntFromJson", + "GetObjectFromJson", + "GetSAMEmbedding", + "GetTextFromJson", + "ISNetLoader", + "ISNetSegment", + "ImageAlphaComposite", + "ImageApplyChannel", + "ImageExtractChannel", + "ImageGaussianBlur", + "ImageMuxer", + "ImageRepeat", + "ImageScaleDown", + "ImageScaleDownBy", + "ImageScaleDownToSize", + "ImageScaleToMegapixels", + "LaMaInpaint", + "LoadImageAsMaskFromUrl", + "LoadImageFromUrl", + "LoadJsonFromUrl", + "MergeModels", + "NumberScaler", + "OverlayInpaintedImage", + "OverlayInpaintedLatent", + "PrepareImageAndMaskForInpaint", + "QRCodeGenerator", + 
"RandomFloat", + "RandomInt", + "SAMEmbeddingToImage", + "SDXLAspectRatioSelector", + "SDXLPromptStyler", + "SeedSelector", + "StringToInt", + "StringToNumber" + ], + { + "title_aux": "comfyui-art-venture" + } + ], + "https://github.com/skfoo/ComfyUI-Coziness": [ + [ + "LoraTextExtractor-b1f83aa2", + "MultiLoraLoader-70bf3d77" + ], + { + "title_aux": "ComfyUI-Coziness" + } + ], + "https://github.com/smagnetize/kb-comfyui-nodes": [ + [ + "SingleImageDataUrlLoader" + ], + { + "title_aux": "kb-comfyui-nodes" + } + ], + "https://github.com/space-nuko/ComfyUI-Disco-Diffusion": [ + [ + "DiscoDiffusion_DiscoDiffusion", + "DiscoDiffusion_DiscoDiffusionExtraSettings", + "DiscoDiffusion_GuidedDiffusionLoader", + "DiscoDiffusion_OpenAICLIPLoader" + ], + { + "title_aux": "Disco Diffusion" + } + ], + "https://github.com/space-nuko/ComfyUI-OpenPose-Editor": [ + [ + "Nui.OpenPoseEditor" + ], + { + "title_aux": "OpenPose Editor" + } + ], + "https://github.com/space-nuko/nui-suite": [ + [ + "Nui.DynamicPromptsTextGen", + "Nui.FeelingLuckyTextGen", + "Nui.OutputString" + ], + { + "title_aux": "nui suite" + } + ], + "https://github.com/spacepxl/ComfyUI-HQ-Image-Save": [ + [ + "LoadEXR", + "LoadLatentEXR", + "SaveEXR", + "SaveLatentEXR", + "SaveTiff" + ], + { + "title_aux": "ComfyUI-HQ-Image-Save" + } + ], + "https://github.com/spacepxl/ComfyUI-Image-Filters": [ + [ + "AdainImage", + "AdainLatent", + "AlphaClean", + "AlphaMatte", + "BatchAlign", + "BatchAverageImage", + "BatchAverageUnJittered", + "BatchNormalizeImage", + "BatchNormalizeLatent", + "BlurImageFast", + "BlurMaskFast", + "ClampOutliers", + "ConvertNormals", + "DifferenceChecker", + "DilateErodeMask", + "EnhanceDetail", + "ExposureAdjust", + "GuidedFilterAlpha", + "ImageConstant", + "ImageConstantHSV", + "JitterImage", + "Keyer", + "LatentStats", + "NormalMapSimple", + "OffsetLatentImage", + "RemapRange", + "Tonemap", + "UnJitterImage", + "UnTonemap" + ], + { + "title_aux": "ComfyUI-Image-Filters" + } + ], + "https://github.com/spacepxl/ComfyUI-RAVE": [ + [ + "ConditioningDebug", + "ImageGridCompose", + "ImageGridDecompose", + "KSamplerRAVE", + "LatentGridCompose", + "LatentGridDecompose" + ], + { + "title_aux": "ComfyUI-RAVE" + } + ], + "https://github.com/spinagon/ComfyUI-seam-carving": [ + [ + "SeamCarving" + ], + { + "title_aux": "ComfyUI-seam-carving" + } + ], + "https://github.com/spinagon/ComfyUI-seamless-tiling": [ + [ + "CircularVAEDecode", + "MakeCircularVAE", + "OffsetImage", + "SeamlessTile" + ], + { + "title_aux": "Seamless tiling Node for ComfyUI" + } + ], + "https://github.com/spro/comfyui-mirror": [ + [ + "LatentMirror" + ], + { + "title_aux": "Latent Mirror node for ComfyUI" + } + ], + "https://github.com/ssitu/ComfyUI_UltimateSDUpscale": [ + [ + "UltimateSDUpscale", + "UltimateSDUpscaleNoUpscale" + ], + { + "title_aux": "UltimateSDUpscale" + } + ], + "https://github.com/ssitu/ComfyUI_fabric": [ + [ + "FABRICPatchModel", + "FABRICPatchModelAdv", + "KSamplerAdvFABRICAdv", + "KSamplerFABRIC", + "KSamplerFABRICAdv" + ], + { + "title_aux": "ComfyUI fabric" + } + ], + "https://github.com/ssitu/ComfyUI_restart_sampling": [ + [ + "KRestartSampler", + "KRestartSamplerAdv", + "KRestartSamplerSimple" + ], + { + "title_aux": "Restart Sampling" + } + ], + "https://github.com/ssitu/ComfyUI_roop": [ + [ + "RoopImproved", + "roop" + ], + { + "title_aux": "ComfyUI roop" + } + ], + "https://github.com/storyicon/comfyui_segment_anything": [ + [ + "GroundingDinoModelLoader (segment anything)", + "GroundingDinoSAMSegment (segment anything)", + 
"InvertMask (segment anything)", + "IsMaskEmpty", + "SAMModelLoader (segment anything)" + ], + { + "title_aux": "segment anything" + } + ], + "https://github.com/strimmlarn/ComfyUI_Strimmlarns_aesthetic_score": [ + [ + "AesthetlcScoreSorter", + "CalculateAestheticScore", + "LoadAesteticModel", + "ScoreToNumber" + ], + { + "title_aux": "ComfyUI_Strimmlarns_aesthetic_score" + } + ], + "https://github.com/styler00dollar/ComfyUI-deepcache": [ + [ + "DeepCache" + ], + { + "title_aux": "ComfyUI-deepcache" + } + ], + "https://github.com/styler00dollar/ComfyUI-sudo-latent-upscale": [ + [ + "SudoLatentUpscale" + ], + { + "title_aux": "ComfyUI-sudo-latent-upscale" + } + ], + "https://github.com/syllebra/bilbox-comfyui": [ + [ + "BilboXLut", + "BilboXPhotoPrompt", + "BilboXVignette" + ], + { + "title_aux": "BilboX's ComfyUI Custom Nodes" + } + ], + "https://github.com/sylym/comfy_vid2vid": [ + [ + "CheckpointLoaderSimpleSequence", + "DdimInversionSequence", + "KSamplerSequence", + "LoadImageMaskSequence", + "LoadImageSequence", + "LoraLoaderSequence", + "SetLatentNoiseSequence", + "TrainUnetSequence", + "VAEEncodeForInpaintSequence" + ], + { + "title_aux": "Vid2vid" + } + ], + "https://github.com/szhublox/ambw_comfyui": [ + [ + "Auto Merge Block Weighted", + "CLIPMergeSimple", + "CheckpointSave", + "ModelMergeBlocks", + "ModelMergeSimple" + ], + { + "title_aux": "Auto-MBW" + } + ], + "https://github.com/taabata/Comfy_Syrian_Falcon_Nodes/raw/main/SyrianFalconNodes.py": [ + [ + "CompositeImage", + "KSamplerAlternate", + "KSamplerPromptEdit", + "KSamplerPromptEditAndAlternate", + "LoopBack", + "QRGenerate", + "WordAsImage" + ], + { + "title_aux": "Syrian Falcon Nodes" + } + ], + "https://github.com/taabata/LCM_Inpaint-Outpaint_Comfy": [ + [ + "ComfyNodesToSaveCanvas", + "FloatNumber", + "FreeU_LCM", + "ImageOutputToComfyNodes", + "ImageShuffle", + "ImageSwitch", + "LCMGenerate", + "LCMGenerate_ReferenceOnly", + "LCMGenerate_SDTurbo", + "LCMGenerate_img2img", + "LCMGenerate_img2img_IPAdapter", + "LCMGenerate_img2img_controlnet", + "LCMGenerate_inpaintv2", + "LCMGenerate_inpaintv3", + "LCMLoader", + "LCMLoader_RefInpaint", + "LCMLoader_ReferenceOnly", + "LCMLoader_SDTurbo", + "LCMLoader_controlnet", + "LCMLoader_controlnet_inpaint", + "LCMLoader_img2img", + "LCMLoraLoader_inpaint", + "LCMLoraLoader_ipadapter", + "LCMLora_inpaint", + "LCMLora_ipadapter", + "LCMT2IAdapter", + "LCM_IPAdapter", + "LCM_IPAdapter_inpaint", + "LCM_outpaint_prep", + "LoadImageNode_LCM", + "Loader_SegmindVega", + "OutpaintCanvasTool", + "SaveImage_Canvas", + "SaveImage_LCM", + "SaveImage_Puzzle", + "SaveImage_PuzzleV2", + "SegmindVega", + "SettingsSwitch", + "stitch" + ], + { + "title_aux": "LCM_Inpaint-Outpaint_Comfy" + } + ], + "https://github.com/talesofai/comfyui-browser": [ + [ + "DifyTextGenerator //Browser", + "LoadImageByUrl //Browser", + "SelectInputs //Browser", + "UploadToRemote //Browser", + "XyzPlot //Browser" + ], + { + "title_aux": "ComfyUI Browser" + } + ], + "https://github.com/theUpsider/ComfyUI-Logic": [ + [ + "Bool", + "Compare", + "DebugPrint", + "Float", + "If ANY execute A else B", + "Int", + "String" + ], + { + "title_aux": "ComfyUI-Logic" + } + ], + "https://github.com/theUpsider/ComfyUI-Styles_CSV_Loader": [ + [ + "Load Styles CSV" + ], + { + "title_aux": "Styles CSV Loader Extension for ComfyUI" + } + ], + "https://github.com/thecooltechguy/ComfyUI-MagicAnimate": [ + [ + "MagicAnimate", + "MagicAnimateModelLoader" + ], + { + "title_aux": "ComfyUI-MagicAnimate" + } + ], + 
"https://github.com/thecooltechguy/ComfyUI-Stable-Video-Diffusion": [ + [ + "SVDDecoder", + "SVDModelLoader", + "SVDSampler", + "SVDSimpleImg2Vid" + ], + { + "title_aux": "ComfyUI Stable Video Diffusion" + } + ], + "https://github.com/thedyze/save-image-extended-comfyui": [ + [ + "SaveImageExtended" + ], + { + "title_aux": "Save Image Extended for ComfyUI" + } + ], + "https://github.com/tocubed/ComfyUI-AudioReactor": [ + [ + "AudioFrameTransformBeats", + "AudioFrameTransformShadertoy", + "AudioLoadPath", + "Shadertoy" + ], + { + "title_aux": "ComfyUI-AudioReactor" + } + ], + "https://github.com/toyxyz/ComfyUI_toyxyz_test_nodes": [ + [ + "CaptureWebcam", + "LatentDelay", + "LoadWebcamImage", + "SaveImagetoPath" + ], + { + "title_aux": "ComfyUI_toyxyz_test_nodes" + } + ], + "https://github.com/trojblue/trNodes": [ + [ + "JpgConvertNode", + "trColorCorrection", + "trLayering", + "trRouter", + "trRouterLonger" + ], + { + "title_aux": "trNodes" + } + ], + "https://github.com/trumanwong/ComfyUI-NSFW-Detection": [ + [ + "NSFWDetection" + ], + { + "title_aux": "ComfyUI-NSFW-Detection" + } + ], + "https://github.com/ttulttul/ComfyUI-Iterative-Mixer": [ + [ + "Batch Unsampler", + "Iterative Mixing KSampler", + "Iterative Mixing KSampler Advanced", + "IterativeMixingSampler", + "IterativeMixingScheduler", + "IterativeMixingSchedulerAdvanced", + "Latent Batch Comparison Plot", + "Latent Batch Statistics Plot", + "MixingMaskGenerator" + ], + { + "title_aux": "ComfyUI Iterative Mixing Nodes" + } + ], + "https://github.com/ttulttul/ComfyUI-Tensor-Operations": [ + [ + "Image Match Normalize", + "Latent Match Normalize" + ], + { + "title_aux": "ComfyUI-Tensor-Operations" + } + ], + "https://github.com/tudal/Hakkun-ComfyUI-nodes/raw/main/hakkun_nodes.py": [ + [ + "Any Converter", + "Calculate Upscale", + "Image Resize To Height", + "Image Resize To Width", + "Image size to string", + "Load Random Image", + "Load Text", + "Multi Text Merge", + "Prompt Parser", + "Random Line", + "Random Line 4" + ], + { + "title_aux": "Hakkun-ComfyUI-nodes" + } + ], + "https://github.com/tusharbhutt/Endless-Nodes": [ + [ + "ESS Aesthetic Scoring", + "ESS Aesthetic Scoring Auto", + "ESS Combo Parameterizer", + "ESS Combo Parameterizer & Prompts", + "ESS Eight Input Random", + "ESS Eight Input Text Switch", + "ESS Float to Integer", + "ESS Float to Number", + "ESS Float to String", + "ESS Float to X", + "ESS Global Envoy", + "ESS Image Reward", + "ESS Image Reward Auto", + "ESS Image Saver with JSON", + "ESS Integer to Float", + "ESS Integer to Number", + "ESS Integer to String", + "ESS Integer to X", + "ESS Number to Float", + "ESS Number to Integer", + "ESS Number to String", + "ESS Number to X", + "ESS Parameterizer", + "ESS Parameterizer & Prompts", + "ESS Six Float Output", + "ESS Six Input Random", + "ESS Six Input Text Switch", + "ESS Six Integer IO Switch", + "ESS Six Integer IO Widget", + "ESS String to Float", + "ESS String to Integer", + "ESS String to Num", + "ESS String to X", + "\u267e\ufe0f\ud83c\udf0a\u2728 Image Saver with JSON" + ], + { + "author": "BiffMunky", + "description": "A small set of nodes I created for various numerical and text inputs. 
Features an image saver with the ability to save JSON to a separate folder, parameter collection nodes, two aesthetic scoring models, switches for text and numbers, and conversion of strings to numbers and vice versa.", + "nickname": "\u267e\ufe0f\ud83c\udf0a\u2728", + "title": "Endless \u267e\ufe0f\ud83c\udf0a\u2728 Nodes", + "title_aux": "Endless \u267e\ufe0f\ud83c\udf0a\u2728 Nodes" + } + ], + "https://github.com/twri/sdxl_prompt_styler": [ + [ + "SDXLPromptStyler", + "SDXLPromptStylerAdvanced" + ], + { + "title_aux": "SDXL Prompt Styler" + } + ], + "https://github.com/uarefans/ComfyUI-Fans": [ + [ + "Fans Prompt Styler Negative", + "Fans Prompt Styler Positive", + "Fans Styler", + "Fans Text Concatenate" + ], + { + "title_aux": "ComfyUI-Fans" + } + ], + "https://github.com/vanillacode314/SimpleWildcardsComfyUI": [ + [ + "SimpleConcat", + "SimpleWildcard" + ], + { + "author": "VanillaCode314", + "description": "A simple wildcard node for ComfyUI. Can also be used as a style prompt node.", + "nickname": "Simple Wildcard", + "title": "Simple Wildcard", + "title_aux": "Simple Wildcard" + } + ], + "https://github.com/vienteck/ComfyUI-Chat-GPT-Integration": [ + [ + "ChatGptPrompt" + ], + { + "title_aux": "ComfyUI-Chat-GPT-Integration" + } + ], + "https://github.com/violet-chen/comfyui-psd2png": [ + [ + "Psd2Png" + ], + { + "title_aux": "comfyui-psd2png" + } + ], + "https://github.com/wallish77/wlsh_nodes": [ + [ + "Alternating KSampler (WLSH)", + "Build Filename String (WLSH)", + "CLIP +/- w/Text Unified (WLSH)", + "CLIP Positive-Negative (WLSH)", + "CLIP Positive-Negative XL (WLSH)", + "CLIP Positive-Negative XL w/Text (WLSH)", + "CLIP Positive-Negative w/Text (WLSH)", + "Checkpoint Loader w/Name (WLSH)", + "Empty Latent by Pixels (WLSH)", + "Empty Latent by Ratio (WLSH)", + "Empty Latent by Size (WLSH)", + "Generate Border Mask (WLSH)", + "Grayscale Image (WLSH)", + "Image Load with Metadata (WLSH)", + "Image Save with Prompt (WLSH)", + "Image Save with Prompt File (WLSH)", + "Image Save with Prompt/Info (WLSH)", + "Image Save with Prompt/Info File (WLSH)", + "Image Scale By Factor (WLSH)", + "Image Scale by Shortside (WLSH)", + "KSamplerAdvanced (WLSH)", + "Multiply Integer (WLSH)", + "Outpaint to Image (WLSH)", + "Prompt Weight (WLSH)", + "Quick Resolution Multiply (WLSH)", + "Resolutions by Ratio (WLSH)", + "SDXL Quick Empty Latent (WLSH)", + "SDXL Quick Image Scale (WLSH)", + "SDXL Resolutions (WLSH)", + "SDXL Steps (WLSH)", + "Save Positive Prompt(WLSH)", + "Save Prompt (WLSH)", + "Save Prompt/Info (WLSH)", + "Seed and Int (WLSH)", + "Seed to Number (WLSH)", + "Simple Pattern Replace (WLSH)", + "Simple String Combine (WLSH)", + "Time String (WLSH)", + "Upscale by Factor with Model (WLSH)", + "VAE Encode for Inpaint w/Padding (WLSH)" + ], + { + "title_aux": "wlsh_nodes" + } + ], + "https://github.com/whatbirdisthat/cyberdolphin": [ + [ + "\ud83d\udc2c Gradio ChatInterface", + "\ud83d\udc2c OpenAI Advanced", + "\ud83d\udc2c OpenAI Compatible", + "\ud83d\udc2c OpenAI DALL\u00b7E", + "\ud83d\udc2c OpenAI Simple" + ], + { + "title_aux": "cyberdolphin" + } + ], + "https://github.com/whmc76/ComfyUI-Openpose-Editor-Plus": [ + [ + "CDL.OpenPoseEditorPlus" + ], + { + "title_aux": "ComfyUI-Openpose-Editor-Plus" + } + ], + "https://github.com/wmatson/easy-comfy-nodes": [ + [ + "EZAssocDictNode", + "EZAssocImgNode", + "EZAssocStrNode", + "EZEmptyDictNode", + "EZHttpPostNode", + "EZLoadImgBatchFromUrlsNode", + "EZLoadImgFromUrlNode", + "EZRemoveImgBackground", + "EZS3Uploader", + "EZVideoCombiner" + ], + {
"title_aux": "easy-comfy-nodes" + } + ], + "https://github.com/wolfden/ComfyUi_PromptStylers": [ + [ + "SDXLPromptStylerAll", + "SDXLPromptStylerHorror", + "SDXLPromptStylerMisc", + "SDXLPromptStylerbyArtist", + "SDXLPromptStylerbyCamera", + "SDXLPromptStylerbyComposition", + "SDXLPromptStylerbyCyberpunkSurrealism", + "SDXLPromptStylerbyDepth", + "SDXLPromptStylerbyEnvironment", + "SDXLPromptStylerbyFantasySetting", + "SDXLPromptStylerbyFilter", + "SDXLPromptStylerbyFocus", + "SDXLPromptStylerbyImpressionism", + "SDXLPromptStylerbyLighting", + "SDXLPromptStylerbyMileHigh", + "SDXLPromptStylerbyMood", + "SDXLPromptStylerbyMythicalCreature", + "SDXLPromptStylerbyOriginal", + "SDXLPromptStylerbyQuantumRealism", + "SDXLPromptStylerbySteamPunkRealism", + "SDXLPromptStylerbySubject", + "SDXLPromptStylerbySurrealism", + "SDXLPromptStylerbyTheme", + "SDXLPromptStylerbyTimeofDay", + "SDXLPromptStylerbyWyvern", + "SDXLPromptbyCelticArt", + "SDXLPromptbyContemporaryNordicArt", + "SDXLPromptbyFashionArt", + "SDXLPromptbyGothicRevival", + "SDXLPromptbyIrishFolkArt", + "SDXLPromptbyRomanticNationalismArt", + "SDXLPromptbySportsArt", + "SDXLPromptbyStreetArt", + "SDXLPromptbyVikingArt", + "SDXLPromptbyWildlifeArt" + ], + { + "title_aux": "SDXL Prompt Styler (customized version by wolfden)" + } + ], + "https://github.com/wolfden/ComfyUi_String_Function_Tree": [ + [ + "StringFunction" + ], + { + "title_aux": "ComfyUi_String_Function_Tree" + } + ], + "https://github.com/wsippel/comfyui_ws/raw/main/sdxl_utility.py": [ + [ + "SDXLResolutionPresets" + ], + { + "title_aux": "SDXLResolutionPresets" + } + ], + "https://github.com/wutipong/ComfyUI-TextUtils": [ + [ + "Text Utils - Join N-Elements of String List", + "Text Utils - Join String List", + "Text Utils - Join Strings", + "Text Utils - Split String to List" + ], + { + "title_aux": "ComfyUI-TextUtils" + } + ], + "https://github.com/wwwins/ComfyUI-Simple-Aspect-Ratio": [ + [ + "SimpleAspectRatio" + ], + { + "title_aux": "ComfyUI-Simple-Aspect-Ratio" + } + ], + "https://github.com/xXAdonesXx/NodeGPT": [ + [ + "AppendAgent", + "Assistant", + "Chat", + "ChatGPT", + "CombineInput", + "Conditioning", + "CostumeAgent_1", + "CostumeAgent_2", + "CostumeMaster_1", + "Critic", + "DisplayString", + "DisplayTextAsImage", + "EVAL", + "Engineer", + "Executor", + "GroupChat", + "Image_generation_Conditioning", + "LM_Studio", + "LoadAPIconfig", + "LoadTXT", + "MemGPT", + "Memory_Excel", + "Model_1", + "Ollama", + "Output2String", + "Planner", + "Scientist", + "TextCombine", + "TextGeneration", + "TextGenerator", + "TextInput", + "TextOutput", + "UserProxy", + "llama-cpp", + "llava", + "oobaboogaOpenAI" + ], + { + "title_aux": "NodeGPT" + } + ], + "https://github.com/xiaoxiaodesha/hd_node": [ + [ + "Combine HDMasks", + "Cover HDMasks", + "HD FaceIndex", + "HD GetMaskArea", + "HD Image Levels", + "HD SmoothEdge", + "HD UltimateSDUpscale" + ], + { + "title_aux": "hd-nodes-comfyui" + } + ], + "https://github.com/yffyhk/comfyui_auto_danbooru": [ + [ + "GetDanbooru", + "TagEncode" + ], + { + "title_aux": "comfyui_auto_danbooru" + } + ], + "https://github.com/yolain/ComfyUI-Easy-Use": [ + [ + "dynamicThresholdingFull", + "easy LLLiteLoader", + "easy XYInputs: CFG Scale", + "easy XYInputs: Checkpoint", + "easy XYInputs: ControlNet", + "easy XYInputs: Denoise", + "easy XYInputs: Lora", + "easy XYInputs: ModelMergeBlocks", + "easy XYInputs: NegativeCond", + "easy XYInputs: NegativeCondList", + "easy XYInputs: PositiveCond", + "easy XYInputs: PositiveCondList", + "easy XYInputs: 
PromptSR", + "easy XYInputs: Sampler/Scheduler", + "easy XYInputs: Seeds++ Batch", + "easy XYInputs: Steps", + "easy XYPlot", + "easy XYPlotAdvanced", + "easy a1111Loader", + "easy boolean", + "easy cascadeLoader", + "easy cleanGpuUsed", + "easy comfyLoader", + "easy compare", + "easy controlnetLoader", + "easy controlnetLoaderADV", + "easy convertAnything", + "easy detailerFix", + "easy float", + "easy fooocusInpaintLoader", + "easy fullLoader", + "easy fullkSampler", + "easy globalSeed", + "easy hiresFix", + "easy if", + "easy imageInsetCrop", + "easy imagePixelPerfect", + "easy imageRemoveBG", + "easy imageSave", + "easy imageScaleDown", + "easy imageScaleDownBy", + "easy imageScaleDownToSize", + "easy imageSize", + "easy imageSizeByLongerSide", + "easy imageSizeBySide", + "easy imageSwitch", + "easy imageToMask", + "easy int", + "easy isSDXL", + "easy joinImageBatch", + "easy kSampler", + "easy kSamplerDownscaleUnet", + "easy kSamplerInpainting", + "easy kSamplerSDTurbo", + "easy kSamplerTiled", + "easy latentCompositeMaskedWithCond", + "easy latentNoisy", + "easy loraStack", + "easy negative", + "easy pipeIn", + "easy pipeOut", + "easy pipeToBasicPipe", + "easy portraitMaster", + "easy poseEditor", + "easy positive", + "easy preDetailerFix", + "easy preSampling", + "easy preSamplingAdvanced", + "easy preSamplingCascade", + "easy preSamplingDynamicCFG", + "easy preSamplingSdTurbo", + "easy promptList", + "easy rangeFloat", + "easy rangeInt", + "easy samLoaderPipe", + "easy seed", + "easy showAnything", + "easy showLoaderSettingsNames", + "easy showSpentTime", + "easy string", + "easy stylesSelector", + "easy svdLoader", + "easy ultralyticsDetectorPipe", + "easy unSampler", + "easy wildcards", + "easy xyAny", + "easy zero123Loader" + ], + { + "title_aux": "ComfyUI Easy Use" + } + ], + "https://github.com/yolanother/DTAIComfyImageSubmit": [ + [ + "DTSimpleSubmitImage", + "DTSubmitImage" + ], + { + "title_aux": "Comfy AI DoubTech.ai Image Sumission Node" + } + ], + "https://github.com/yolanother/DTAIComfyLoaders": [ + [ + "DTCLIPLoader", + "DTCLIPVisionLoader", + "DTCheckpointLoader", + "DTCheckpointLoaderSimple", + "DTControlNetLoader", + "DTDiffControlNetLoader", + "DTDiffusersLoader", + "DTGLIGENLoader", + "DTLoadImage", + "DTLoadImageMask", + "DTLoadLatent", + "DTLoraLoader", + "DTLorasLoader", + "DTStyleModelLoader", + "DTUpscaleModelLoader", + "DTVAELoader", + "DTunCLIPCheckpointLoader" + ], + { + "title_aux": "Comfy UI Online Loaders" + } + ], + "https://github.com/yolanother/DTAIComfyPromptAgent": [ + [ + "DTPromptAgent", + "DTPromptAgentString" + ], + { + "title_aux": "Comfy UI Prompt Agent" + } + ], + "https://github.com/yolanother/DTAIComfyQRCodes": [ + [ + "QRCode" + ], + { + "title_aux": "Comfy UI QR Codes" + } + ], + "https://github.com/yolanother/DTAIComfyVariables": [ + [ + "DTCLIPTextEncode", + "DTSingleLineStringVariable", + "DTSingleLineStringVariableNoClip", + "FloatVariable", + "IntVariable", + "StringFormat", + "StringFormatSingleLine", + "StringVariable" + ], + { + "title_aux": "Variables for Comfy UI" + } + ], + "https://github.com/yolanother/DTAIImageToTextNode": [ + [ + "DTAIImageToTextNode", + "DTAIImageUrlToTextNode" + ], + { + "title_aux": "Image to Text Node" + } + ], + "https://github.com/youyegit/tdxh_node_comfyui": [ + [ + "TdxhBoolNumber", + "TdxhClipVison", + "TdxhControlNetApply", + "TdxhControlNetProcessor", + "TdxhFloatInput", + "TdxhImageToSize", + "TdxhImageToSizeAdvanced", + "TdxhImg2ImgLatent", + "TdxhIntInput", + "TdxhLoraLoader", + 
"TdxhOnOrOff", + "TdxhReference", + "TdxhStringInput", + "TdxhStringInputTranslator" + ], + { + "title_aux": "tdxh_node_comfyui" + } + ], + "https://github.com/yuvraj108c/ComfyUI-Pronodes": [ + [ + "LoadYoutubeVideoNode" + ], + { + "title_aux": "ComfyUI-Pronodes" + } + ], + "https://github.com/yuvraj108c/ComfyUI-Whisper": [ + [ + "Add Subtitles To Background", + "Add Subtitles To Frames", + "Apply Whisper", + "Resize Cropped Subtitles" + ], + { + "title_aux": "ComfyUI Whisper" + } + ], + "https://github.com/zcfrank1st/Comfyui-Toolbox": [ + [ + "PreviewJson", + "PreviewVideo", + "SaveJson", + "TestJsonPreview" + ], + { + "title_aux": "Comfyui-Toolbox" + } + ], + "https://github.com/zcfrank1st/Comfyui-Yolov8": [ + [ + "Yolov8Detection", + "Yolov8Segmentation" + ], + { + "title_aux": "ComfyUI Yolov8" + } + ], + "https://github.com/zcfrank1st/comfyui_visual_anagrams": [ + [ + "VisualAnagramsAnimate", + "VisualAnagramsSample" + ], + { + "title_aux": "comfyui_visual_anagram" + } + ], + "https://github.com/zer0TF/cute-comfy": [ + [ + "Cute.Placeholder" + ], + { + "title_aux": "Cute Comfy" + } + ], + "https://github.com/zfkun/ComfyUI_zfkun": [ + [ + "ZFLoadImagePath", + "ZFPreviewText", + "ZFPreviewTextMultiline", + "ZFShareScreen", + "ZFTextTranslation" + ], + { + "title_aux": "ComfyUI_zfkun" + } + ], + "https://github.com/zhongpei/ComfyUI-InstructIR": [ + [ + "InstructIRProcess", + "LoadInstructIRModel" + ], + { + "title_aux": "ComfyUI for InstructIR" + } + ], + "https://github.com/zhongpei/Comfyui_image2prompt": [ + [ + "Image2Text", + "LoadImage2TextModel" + ], + { + "title_aux": "Comfyui_image2prompt" + } + ], + "https://github.com/zhuanqianfish/ComfyUI-EasyNode": [ + [ + "EasyCaptureNode", + "EasyVideoOutputNode", + "SendImageWebSocket" + ], + { + "title_aux": "EasyCaptureNode for ComfyUI" + } + ], + "https://raw.githubusercontent.com/throttlekitty/SDXLCustomAspectRatio/main/SDXLAspectRatio.py": [ + [ + "SDXLAspectRatio" + ], + { + "title_aux": "SDXLCustomAspectRatio" + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/git_helper.py b/custom_nodes/ComfyUI-Manager/git_helper.py new file mode 100644 index 0000000000000000000000000000000000000000..e6a85b354cd4debc38003f3fd6080b168b5e0b48 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/git_helper.py @@ -0,0 +1,319 @@ +import sys +import os +import git +import configparser +import re +import json +from torchvision.datasets.utils import download_url +from tqdm.auto import tqdm +from git.remote import RemoteProgress + +config_path = os.path.join(os.path.dirname(__file__), "config.ini") +nodelist_path = os.path.join(os.path.dirname(__file__), "custom-node-list.json") +working_directory = os.getcwd() + + +class GitProgress(RemoteProgress): + def __init__(self): + super().__init__() + self.pbar = tqdm(ascii=True) + + def update(self, op_code, cur_count, max_count=None, message=''): + self.pbar.total = max_count + self.pbar.n = cur_count + self.pbar.pos = 0 + self.pbar.refresh() + + +def gitclone(custom_nodes_path, url, target_hash=None): + repo_name = os.path.splitext(os.path.basename(url))[0] + repo_path = os.path.join(custom_nodes_path, repo_name) + + # Clone the repository from the remote URL + repo = git.Repo.clone_from(url, repo_path, recursive=True, progress=GitProgress()) + + if target_hash is not None: + print(f"CHECKOUT: {repo_name} [{target_hash}]") + repo.git.checkout(target_hash) + + repo.git.clear_cache() + repo.close() + + +def gitcheck(path, do_fetch=False): + try: + # Fetch the latest commits from the 
remote repository + repo = git.Repo(path) + + if repo.head.is_detached: + print("CUSTOM NODE CHECK: True") + return + + current_branch = repo.active_branch + branch_name = current_branch.name + + remote_name = 'origin' + remote = repo.remote(name=remote_name) + + if do_fetch: + remote.fetch() + + # Get the current commit hash and the commit hash of the remote branch + commit_hash = repo.head.commit.hexsha + remote_commit_hash = repo.refs[f'{remote_name}/{branch_name}'].object.hexsha + + # Compare the commit hashes to determine if the local repository is behind the remote repository + if commit_hash != remote_commit_hash: + # Get the commit dates + commit_date = repo.head.commit.committed_datetime + remote_commit_date = repo.refs[f'{remote_name}/{branch_name}'].object.committed_datetime + + # Compare the commit dates to determine if the local repository is behind the remote repository + if commit_date < remote_commit_date: + print("CUSTOM NODE CHECK: True") + else: + print("CUSTOM NODE CHECK: False") + except Exception as e: + print(e) + print("CUSTOM NODE CHECK: Error") + + +def switch_to_default_branch(repo): + show_result = repo.git.remote("show", "origin") + matches = re.search(r"\s*HEAD branch:\s*(.*)", show_result) + if matches: + default_branch = matches.group(1) + repo.git.checkout(default_branch) + + +def gitpull(path): + # Check if the path is a git repository + if not os.path.exists(os.path.join(path, '.git')): + raise ValueError('Not a git repository') + + # Pull the latest changes from the remote repository + repo = git.Repo(path) + if repo.is_dirty(): + repo.git.stash() + + commit_hash = repo.head.commit.hexsha + try: + if repo.head.is_detached: + switch_to_default_branch(repo) + + current_branch = repo.active_branch + branch_name = current_branch.name + + remote_name = 'origin' + remote = repo.remote(name=remote_name) + + remote.fetch() + remote_commit_hash = repo.refs[f'{remote_name}/{branch_name}'].object.hexsha + + if commit_hash == remote_commit_hash: + print("CUSTOM NODE PULL: None") # there is no update + repo.close() + return + + remote.pull() + + repo.git.submodule('update', '--init', '--recursive') + new_commit_hash = repo.head.commit.hexsha + + if commit_hash != new_commit_hash: + print("CUSTOM NODE PULL: Success") # update success + else: + print("CUSTOM NODE PULL: Fail") # update fail + except Exception as e: + print(e) + print("CUSTOM NODE PULL: Fail") # unknown git error + + repo.close() + + +def checkout_comfyui_hash(target_hash): + repo_path = os.path.join(working_directory, '..') # ComfyUI dir + + repo = git.Repo(repo_path) + commit_hash = repo.head.commit.hexsha + + if commit_hash != target_hash: + try: + print(f"CHECKOUT: ComfyUI [{target_hash}]") + repo.git.checkout(target_hash) + except git.GitCommandError as e: + print(f"Error checking out the ComfyUI: {str(e)}") + + +def checkout_custom_node_hash(git_custom_node_infos): + repo_name_to_url = {} + + for url in git_custom_node_infos.keys(): + repo_name = url.split('/')[-1] + + if repo_name.endswith('.git'): + repo_name = repo_name[:-4] + + repo_name_to_url[repo_name] = url + + for path in os.listdir(working_directory): + if path.endswith("ComfyUI-Manager"): + continue + + fullpath = os.path.join(working_directory, path) + + if os.path.isdir(fullpath): + is_disabled = path.endswith(".disabled") + + try: + git_dir = os.path.join(fullpath, '.git') + if not os.path.exists(git_dir): + continue + + need_checkout = False + repo_name = os.path.basename(fullpath) + + if repo_name.endswith('.disabled'): + repo_name 
= repo_name[:-9] + + item = git_custom_node_infos[repo_name_to_url[repo_name]] + if item['disabled'] and is_disabled: + pass + elif item['disabled'] and not is_disabled: + # disable + print(f"DISABLE: {repo_name}") + new_path = fullpath + ".disabled" + os.rename(fullpath, new_path) + pass + elif not item['disabled'] and is_disabled: + # enable + print(f"ENABLE: {repo_name}") + new_path = fullpath[:-9] + os.rename(fullpath, new_path) + fullpath = new_path + need_checkout = True + else: + need_checkout = True + + if need_checkout: + repo = git.Repo(fullpath) + commit_hash = repo.head.commit.hexsha + + if commit_hash != item['hash']: + print(f"CHECKOUT: {repo_name} [{item['hash']}]") + repo.git.checkout(item['hash']) + except Exception: + print(f"Failed to restore snapshots for the custom node '{path}'") + + # clone missing + for k, v in git_custom_node_infos.items(): + if not v['disabled']: + repo_name = k.split('/')[-1] + if repo_name.endswith('.git'): + repo_name = repo_name[:-4] + + path = os.path.join(working_directory, repo_name) + if not os.path.exists(path): + print(f"CLONE: {path}") + gitclone(working_directory, k, v['hash']) + + +def invalidate_custom_node_file(file_custom_node_infos): + global nodelist_path + + enabled_set = set() + for item in file_custom_node_infos: + if not item['disabled']: + enabled_set.add(item['filename']) + + for path in os.listdir(working_directory): + fullpath = os.path.join(working_directory, path) + + if not os.path.isdir(fullpath) and fullpath.endswith('.py'): + if path not in enabled_set: + print(f"DISABLE: {path}") + new_path = fullpath+'.disabled' + os.rename(fullpath, new_path) + + elif not os.path.isdir(fullpath) and fullpath.endswith('.py.disabled'): + path = path[:-9] + if path in enabled_set: + print(f"ENABLE: {path}") + new_path = fullpath[:-9] + os.rename(fullpath, new_path) + + # download missing: just support for 'copy' style + py_to_url = {} + + with open(nodelist_path, 'r', encoding="UTF-8") as json_file: + info = json.load(json_file) + for item in info['custom_nodes']: + if item['install_type'] == 'copy': + for url in item['files']: + if url.endswith('.py'): + py = url.split('/')[-1] + py_to_url[py] = url + + for item in file_custom_node_infos: + filename = item['filename'] + if not item['disabled']: + target_path = os.path.join(working_directory, filename) + + if not os.path.exists(target_path) and filename in py_to_url: + url = py_to_url[filename] + print(f"DOWNLOAD: {filename}") + download_url(url, working_directory) + + +def apply_snapshot(target): + try: + path = os.path.join(os.path.dirname(__file__), 'snapshots', f"{target}") + if os.path.exists(path): + with open(path, 'r', encoding="UTF-8") as json_file: + info = json.load(json_file) + + comfyui_hash = info['comfyui'] + git_custom_node_infos = info['git_custom_nodes'] + file_custom_node_infos = info['file_custom_nodes'] + + checkout_comfyui_hash(comfyui_hash) + checkout_custom_node_hash(git_custom_node_infos) + invalidate_custom_node_file(file_custom_node_infos) + + print("APPLY SNAPSHOT: True") + return + + print(f"Snapshot file not found: `{path}`") + print("APPLY SNAPSHOT: False") + except Exception as e: + print(e) + print("APPLY SNAPSHOT: False") + + +def setup_environment(): + config = configparser.ConfigParser() + config.read(config_path) + if 'default' in config and 'git_exe' in config['default'] and config['default']['git_exe'] != '': + git.Git().update_environment(GIT_PYTHON_GIT_EXECUTABLE=config['default']['git_exe']) + + +setup_environment() + + +try: + if 
sys.argv[1] == "--clone": + gitclone(sys.argv[2], sys.argv[3]) + elif sys.argv[1] == "--check": + gitcheck(sys.argv[2], False) + elif sys.argv[1] == "--fetch": + gitcheck(sys.argv[2], True) + elif sys.argv[1] == "--pull": + gitpull(sys.argv[2]) + elif sys.argv[1] == "--apply-snapshot": + apply_snapshot(sys.argv[2]) + sys.exit(0) +except Exception as e: + print(e) + sys.exit(-1) + + diff --git a/custom_nodes/ComfyUI-Manager/glob/cm_global.py b/custom_nodes/ComfyUI-Manager/glob/cm_global.py new file mode 100644 index 0000000000000000000000000000000000000000..4041bcb6d0b72d7c802a01274971a3be03e87bbb --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/glob/cm_global.py @@ -0,0 +1,112 @@ +import traceback + +# +# Global Var +# +# Usage: +# import cm_global +# cm_global.variables['comfyui.revision'] = 1832 +# print(f"log mode: {cm_global.variables['logger.enabled']}") +# +variables = {} + + +# +# Global API +# +# Usage: +# [register API] +# import cm_global +# +# def api_hello(msg): +# print(f"hello: {msg}") +# return msg +# +# cm_global.register_api('hello', api_hello) +# +# [use API] +# import cm_global +# +# test = cm_global.try_call(api='hello', msg='an example') +# print(f"'{test}' is returned") +# + +APIs = {} + + +def register_api(k, f): + global APIs + APIs[k] = f + + +def try_call(**kwargs): + if 'api' in kwargs: + api_name = kwargs['api'] + try: + api = APIs.get(api_name) + if api is not None: + del kwargs['api'] + return api(**kwargs) + else: + print(f"WARN: The '{kwargs['api']}' API has not been registered.") + except Exception as e: + print(f"ERROR: An exception occurred while calling the '{api_name}' API.") + raise e + else: + return None + + +# +# Extension Info +# +# Usage: +# import cm_global +# +# cm_global.extension_infos['my_extension'] = {'version': [0, 1], 'name': 'me', 'description': 'example extension', } +# +extension_infos = {} + +on_extension_registered_handlers = {} + + +def register_extension(extension_name, v): + global extension_infos + global on_extension_registered_handlers + extension_infos[extension_name] = v + + if extension_name in on_extension_registered_handlers: + for k, f in on_extension_registered_handlers[extension_name]: + try: + f(extension_name, v) + except Exception: + print(f"[ERROR] '{k}' on_extension_registered_handlers") + traceback.print_exc() + + del on_extension_registered_handlers[extension_name] + + +def add_on_extension_registered(k, extension_name, f): + global on_extension_registered_handlers + if extension_name in extension_infos: + try: + v = extension_infos[extension_name] + f(extension_name, v) + except Exception: + print(f"[ERROR] '{k}' on_extension_registered_handler") + traceback.print_exc() + else: + if extension_name not in on_extension_registered_handlers: + on_extension_registered_handlers[extension_name] = [] + + on_extension_registered_handlers[extension_name].append((k, f)) + + +def add_on_revision_detected(k, f): + if 'comfyui.revision' in variables: + try: + f(variables['comfyui.revision']) + except Exception: + print(f"[ERROR] '{k}' on_revision_detected_handler") + traceback.print_exc() + else: + variables['cm.on_revision_detected_handler'].append((k, f)) diff --git a/custom_nodes/ComfyUI-Manager/js/a1111-alter-downloader.js b/custom_nodes/ComfyUI-Manager/js/a1111-alter-downloader.js new file mode 100644 index 0000000000000000000000000000000000000000..65780a6b812d6a331f72acffdbcdf6c439e787ff --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/a1111-alter-downloader.js @@ -0,0 +1,566 @@ +import { app } from 
"../../scripts/app.js"; +import { api } from "../../scripts/api.js" +import { ComfyDialog, $el } from "../../scripts/ui.js"; +import { install_checked_custom_node, manager_instance, rebootAPI } from "./common.js"; + +async function getAlterList() { + var mode = manager_instance.datasrc_combo.value; + + var skip_update = ""; + if(manager_instance.update_check_checkbox.checked) + skip_update = "&skip_update=true"; + + const response = await api.fetchApi(`/alternatives/getlist?mode=${mode}${skip_update}`); + + const data = await response.json(); + return data; +} + +export class AlternativesInstaller extends ComfyDialog { + static instance = null; + + install_buttons = []; + message_box = null; + data = null; + + clear() { + this.install_buttons = []; + this.message_box = null; + this.data = null; + } + + constructor(app, manager_dialog) { + super(); + this.manager_dialog = manager_dialog; + this.search_keyword = ''; + this.element = $el("div.comfy-modal", { parent: document.body }, []); + } + + startInstall(target) { + const self = AlternativesInstaller.instance; + + self.updateMessage(`
Installing '${target.title}'`); + } + + disableButtons() { + for(let i in this.install_buttons) { + this.install_buttons[i].disabled = true; + this.install_buttons[i].style.backgroundColor = 'gray'; + } + } + + apply_searchbox(data) { + let keyword = this.search_box.value.toLowerCase(); + for(let i in this.grid_rows) { + let data1 = this.grid_rows[i].data; + let data2 = data1.custom_node; + + if(!data2) + continue; + + let content = data1.tags.toLowerCase() + data1.description.toLowerCase() + data2.author.toLowerCase() + data2.description.toLowerCase() + data2.title.toLowerCase(); + + if(this.filter && this.filter != '*') { + if(this.filter != data2.installed) { + this.grid_rows[i].control.style.display = 'none'; + continue; + } + } + + if(keyword == "") + this.grid_rows[i].control.style.display = null; + else if(content.includes(keyword)) { + this.grid_rows[i].control.style.display = null; + } + else { + this.grid_rows[i].control.style.display = 'none'; + } + } + } + + async invalidateControl() { + this.clear(); + + // splash + while (this.element.children.length) { + this.element.removeChild(this.element.children[0]); + } + + const msg = $el('div', {id:'custom-message'}, + [$el('br'), + 'The custom node DB is currently being updated, and updates to custom nodes are being checked for.', + $el('br'), + 'NOTE: Update only checks for extensions that have been fetched.', + $el('br')]); + msg.style.height = '100px'; + msg.style.verticalAlign = 'middle'; + this.element.appendChild(msg); + + // invalidate + this.data = (await getAlterList()).items; + + this.element.removeChild(msg); + + while (this.element.children.length) { + this.element.removeChild(this.element.children[0]); + } + + this.createHeaderControls(); + await this.createGrid(); + this.apply_searchbox(this.data); + this.createBottomControls(); + } + + updateMessage(msg, btn_id) { + this.message_box.innerHTML = msg; + if(btn_id) { + const rebootButton = document.getElementById(btn_id); + const self = this; + rebootButton.addEventListener("click", + function() { + if(rebootAPI()) { + self.close(); + self.manager_dialog.close(); + } + }); + } + } + + invalidate_checks(is_checked, install_state) { + if(is_checked) { + for(let i in this.grid_rows) { + let data = this.grid_rows[i].data; + let checkbox = this.grid_rows[i].checkbox; + let buttons = this.grid_rows[i].buttons; + + checkbox.disabled = data.custom_node.installed != install_state; + + if(checkbox.disabled) { + for(let j in buttons) { + buttons[j].style.display = 'none'; + } + } + else { + for(let j in buttons) { + buttons[j].style.display = null; + } + } + } + + this.checkbox_all.disabled = false; + } + else { + for(let i in this.grid_rows) { + let checkbox = this.grid_rows[i].checkbox; + if(checkbox.check) + return; // do nothing + } + + // every checkbox is unchecked -> enable all checkbox + for(let i in this.grid_rows) { + let checkbox = this.grid_rows[i].checkbox; + let buttons = this.grid_rows[i].buttons; + checkbox.disabled = false; + + for(let j in buttons) { + buttons[j].style.display = null; + } + } + + this.checkbox_all.checked = false; + this.checkbox_all.disabled = true; + } + } + + check_all(is_checked) { + if(is_checked) { + // lookup first checked item's state + let check_state = null; + for(let i in this.grid_rows) { + let checkbox = this.grid_rows[i].checkbox; + if(checkbox.checked) { + check_state = this.grid_rows[i].data.custom_node.installed; + } + } + + if(check_state == null) + return; + + // check only same state items + for(let i in this.grid_rows) { + let 
checkbox = this.grid_rows[i].checkbox; + if(this.grid_rows[i].data.custom_node.installed == check_state) + checkbox.checked = true; + } + } + else { + // uncheck all + for(let i in this.grid_rows) { + let checkbox = this.grid_rows[i].checkbox; + let buttons = this.grid_rows[i].buttons; + checkbox.checked = false; + checkbox.disabled = false; + + for(let j in buttons) { + buttons[j].style.display = null; + } + } + + this.checkbox_all.disabled = true; + } + } + + async createGrid() { + var grid = document.createElement('table'); + grid.setAttribute('id', 'alternatives-grid'); + + this.grid_rows = {}; + + let self = this; + + var thead = document.createElement('thead'); + var tbody = document.createElement('tbody'); + + var headerRow = document.createElement('tr'); + thead.style.position = "sticky"; + thead.style.top = "0px"; + thead.style.borderCollapse = "collapse"; + thead.style.tableLayout = "fixed"; + + var header0 = document.createElement('th'); + header0.style.width = "20px"; + this.checkbox_all = $el("input",{type:'checkbox', id:'check_all'},[]); + header0.appendChild(this.checkbox_all); + this.checkbox_all.checked = false; + this.checkbox_all.disabled = true; + this.checkbox_all.addEventListener('change', function() { self.check_all.call(self, self.checkbox_all.checked); }); + + var header1 = document.createElement('th'); + header1.innerHTML = '  ID  '; + header1.style.width = "20px"; + var header2 = document.createElement('th'); + header2.innerHTML = 'Tags'; + header2.style.width = "10%"; + var header3 = document.createElement('th'); + header3.innerHTML = 'Author'; + header3.style.width = "150px"; + var header4 = document.createElement('th'); + header4.innerHTML = 'Title'; + header4.style.width = "20%"; + var header5 = document.createElement('th'); + header5.innerHTML = 'Description'; + header5.style.width = "50%"; + var header6 = document.createElement('th'); + header6.innerHTML = 'Install'; + header6.style.width = "130px"; + + header1.style.position = "sticky"; + header1.style.top = "0px"; + header2.style.position = "sticky"; + header2.style.top = "0px"; + header3.style.position = "sticky"; + header3.style.top = "0px"; + header4.style.position = "sticky"; + header4.style.top = "0px"; + header5.style.position = "sticky"; + header5.style.top = "0px"; + + thead.appendChild(headerRow); + headerRow.appendChild(header0); + headerRow.appendChild(header1); + headerRow.appendChild(header2); + headerRow.appendChild(header3); + headerRow.appendChild(header4); + headerRow.appendChild(header5); + headerRow.appendChild(header6); + + headerRow.style.backgroundColor = "Black"; + headerRow.style.color = "White"; + headerRow.style.textAlign = "center"; + headerRow.style.width = "100%"; + headerRow.style.padding = "0"; + + grid.appendChild(thead); + grid.appendChild(tbody); + + if(this.data) + for (var i = 0; i < this.data.length; i++) { + const data = this.data[i]; + var dataRow = document.createElement('tr'); + + let data0 = document.createElement('td'); + let checkbox = $el("input",{type:'checkbox', id:`check_${i}`},[]); + data0.appendChild(checkbox); + checkbox.checked = false; + checkbox.addEventListener('change', function() { self.invalidate_checks.call(self, checkbox.checked, data.custom_node?.installed); }); + + var data1 = document.createElement('td'); + data1.style.textAlign = "center"; + data1.innerHTML = i+1; + var data2 = document.createElement('td'); + data2.innerHTML = ` ${data.tags}`; + var data3 = document.createElement('td'); + var data4 = document.createElement('td'); + 
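// author/title cells are populated from the matched custom node entry below; rows without one fall back to 'Unknown'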
if(data.custom_node) { + data3.innerHTML = ` ${data.custom_node.author}`; + data4.innerHTML = ` ${data.custom_node.title}`; + } + else { + data3.innerHTML = ` Unknown`; + data4.innerHTML = ` Unknown`; + } + var data5 = document.createElement('td'); + data5.innerHTML = data.description; + var data6 = document.createElement('td'); + data6.style.textAlign = "center"; + + var installBtn = document.createElement('button'); + var installBtn2 = null; + var installBtn3 = null; + + if(data.custom_node) { + this.install_buttons.push(installBtn); + + switch(data.custom_node.installed) { + case 'Disabled': + installBtn3 = document.createElement('button'); + installBtn3.innerHTML = 'Enable'; + installBtn3.style.backgroundColor = 'blue'; + installBtn3.style.color = 'white'; + this.install_buttons.push(installBtn3); + + installBtn.innerHTML = 'Uninstall'; + installBtn.style.backgroundColor = 'red'; + installBtn.style.color = 'white'; + break; + case 'Update': + installBtn2 = document.createElement('button'); + installBtn2.innerHTML = 'Update'; + installBtn2.style.backgroundColor = 'blue'; + installBtn2.style.color = 'white'; + this.install_buttons.push(installBtn2); + + installBtn3 = document.createElement('button'); + installBtn3.innerHTML = 'Disable'; + installBtn3.style.backgroundColor = 'MediumSlateBlue'; + installBtn3.style.color = 'white'; + this.install_buttons.push(installBtn3); + + installBtn.innerHTML = 'Uninstall'; + installBtn.style.backgroundColor = 'red'; + installBtn.style.color = 'white'; + break; + case 'True': + installBtn3 = document.createElement('button'); + installBtn3.innerHTML = 'Disable'; + installBtn3.style.backgroundColor = 'MediumSlateBlue'; + installBtn3.style.color = 'white'; + this.install_buttons.push(installBtn3); + + installBtn.innerHTML = 'Uninstall'; + installBtn.style.backgroundColor = 'red'; + installBtn.style.color = 'white'; + break; + case 'False': + installBtn.innerHTML = 'Install'; + installBtn.style.backgroundColor = 'black'; + installBtn.style.color = 'white'; + break; + default: + installBtn.innerHTML = 'Try Install'; + installBtn.style.backgroundColor = 'Gray'; + installBtn.style.color = 'white'; + } + + let j = i; + if(installBtn2 != null) { + installBtn2.style.width = "120px"; + installBtn2.addEventListener('click', function() { + install_checked_custom_node(self.grid_rows, j, AlternativesInstaller.instance, 'update'); + }); + + data6.appendChild(installBtn2); + } + + if(installBtn3 != null) { + installBtn3.style.width = "120px"; + installBtn3.addEventListener('click', function() { + install_checked_custom_node(self.grid_rows, j, AlternativesInstaller.instance, 'toggle_active'); + }); + + data6.appendChild(installBtn3); + } + + + installBtn.style.width = "120px"; + installBtn.addEventListener('click', function() { + if(this.innerHTML == 'Uninstall') { + if (confirm(`Are you sure uninstall ${data.title}?`)) { + install_checked_custom_node(self.grid_rows, j, AlternativesInstaller.instance, 'uninstall'); + } + } + else { + install_checked_custom_node(self.grid_rows, j, AlternativesInstaller.instance, 'install'); + } + }); + + data6.appendChild(installBtn); + } + + dataRow.style.backgroundColor = "var(--bg-color)"; + dataRow.style.color = "var(--fg-color)"; + dataRow.style.textAlign = "left"; + + dataRow.appendChild(data0); + dataRow.appendChild(data1); + dataRow.appendChild(data2); + dataRow.appendChild(data3); + dataRow.appendChild(data4); + dataRow.appendChild(data5); + dataRow.appendChild(data6); + tbody.appendChild(dataRow); + + let buttons = []; + 
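// keep references to this row's action buttons so check_all/invalidate_checks can show or hide them as a group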
if(installBtn) { + buttons.push(installBtn); + } + if(installBtn2) { + buttons.push(installBtn2); + } + if(installBtn3) { + buttons.push(installBtn3); + } + + this.grid_rows[i] = {data:data, buttons:buttons, checkbox:checkbox, control:dataRow}; + } + + const panel = document.createElement('div'); + panel.style.width = "100%"; + panel.appendChild(grid); + + function handleResize() { + const parentHeight = self.element.clientHeight; + const gridHeight = parentHeight - 200; + + grid.style.height = gridHeight + "px"; + } + window.addEventListener("resize", handleResize); + + grid.style.position = "relative"; + grid.style.display = "inline-block"; + grid.style.width = "100%"; + grid.style.height = "100%"; + grid.style.overflowY = "scroll"; + this.element.style.height = "85%"; + this.element.style.width = "80%"; + this.element.appendChild(panel); + + handleResize(); + } + + createFilterCombo() { + let combo = document.createElement("select"); + + combo.style.cssFloat = "left"; + combo.style.fontSize = "14px"; + combo.style.padding = "4px"; + combo.style.background = "black"; + combo.style.marginLeft = "2px"; + combo.style.width = "199px"; + combo.id = `combo-manger-filter`; + combo.style.borderRadius = "15px"; + + let items = + [ + { value:'*', text:'Filter: all' }, + { value:'Disabled', text:'Filter: disabled' }, + { value:'Update', text:'Filter: update' }, + { value:'True', text:'Filter: installed' }, + { value:'False', text:'Filter: not-installed' }, + ]; + + items.forEach(item => { + const option = document.createElement("option"); + option.value = item.value; + option.text = item.text; + combo.appendChild(option); + }); + + let self = this; + combo.addEventListener('change', function(event) { + self.filter = event.target.value; + self.apply_searchbox(); + }); + + if(self.filter) { + combo.value = self.filter; + } + + return combo; + } + + createHeaderControls() { + let self = this; + this.search_box = $el('input.cm-search-filter', {type:'text', id:'manager-alternode-search-box', placeholder:'input search keyword', value:this.search_keyword}, []); + this.search_box.style.height = "25px"; + this.search_box.onkeydown = (event) => { + if (event.key === 'Enter') { + self.search_keyword = self.search_box.value; + self.apply_searchbox(); + } + if (event.key === 'Escape') { + self.search_keyword = self.search_box.value; + self.apply_searchbox(); + } + }; + + let search_button = document.createElement("button"); + search_button.className = "cm-small-button"; + search_button.innerHTML = "Search"; + search_button.onclick = () => { + self.search_keyword = self.search_box.value; + self.apply_searchbox(); + }; + search_button.style.display = "inline-block"; + + let filter_control = this.createFilterCombo(); + filter_control.style.display = "inline-block"; + + let cell = $el('td', {width:'100%'}, [filter_control, this.search_box, ' ', search_button]); + let search_control = $el('table', {width:'100%'}, + [ + $el('tr', {}, [cell]) + ] + ); + + cell.style.textAlign = "right"; + this.element.appendChild(search_control); + } + + async createBottomControls() { + var close_button = document.createElement("button"); + close_button.className = "cm-small-button"; + close_button.innerHTML = "Close"; + close_button.onclick = () => { this.close(); } + close_button.style.display = "inline-block"; + + this.message_box = $el('div', {id:'alternatives-installer-message'}, [$el('br'), '']); + this.message_box.style.height = '60px'; + this.message_box.style.verticalAlign = 'middle'; + + 
this.element.appendChild(this.message_box); + this.element.appendChild(close_button); + } + + async show() { + try { + this.invalidateControl(); + this.element.style.display = "block"; + this.element.style.zIndex = 10001; + } + catch(exception) { + app.ui.dialog.show(`Failed to get alternatives list. / ${exception}`); + console.error(exception); + } + } +} diff --git a/custom_nodes/ComfyUI-Manager/js/cm-api.js b/custom_nodes/ComfyUI-Manager/js/cm-api.js new file mode 100644 index 0000000000000000000000000000000000000000..e65cb3480d4eab98b6a27db81148bc8124f91cdc --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/cm-api.js @@ -0,0 +1,54 @@ +import { api } from "../../scripts/api.js"; +import { app } from "../../scripts/app.js"; +import { sleep } from "./common.js"; + +async function tryInstallCustomNode(event) { + let msg = '-= [ComfyUI Manager] extension installation request =-\n\n'; + msg += `The '${event.detail.sender}' extension requires the installation of the '${event.detail.title}' extension. `; + + if(event.detail.target.installed == 'Disabled') { + msg += 'However, the extension is currently disabled. Would you like to enable it and reboot?' + } + else if(event.detail.target.installed == 'True') { + msg += 'However, it seems that the extension is in an import-fail state or is not compatible with the current version. Please address this issue.'; + } + else { + msg += `Would you like to install it and reboot?`; + } + + msg += `\n\nRequest message:\n${event.detail.msg}`; + + if(event.detail.target.installed == 'True') { + alert(msg); + return; + } + + let res = confirm(msg); + if(res) { + if(event.detail.target.installed == 'Disabled') { + const response = await api.fetchApi(`/customnode/toggle_active`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(event.detail.target) + }); + } + else { + await sleep(300); + app.ui.dialog.show(`Installing... 
'${event.detail.target.title}'`); + + const response = await api.fetchApi(`/customnode/install`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(event.detail.target) + }); + } + + api.fetchApi("/manager/reboot"); + + await sleep(300); + + app.ui.dialog.show(`Rebooting...`); + } +} + +api.addEventListener("cm-api-try-install-customnode", tryInstallCustomNode); diff --git a/custom_nodes/ComfyUI-Manager/js/comfyui-manager.js b/custom_nodes/ComfyUI-Manager/js/comfyui-manager.js new file mode 100644 index 0000000000000000000000000000000000000000..9f7224ec2ca1f02d92df060effdfb6a5b8f8da5a --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/comfyui-manager.js @@ -0,0 +1,1403 @@ +import { app } from "../../scripts/app.js"; +import { api } from "../../scripts/api.js" +import { ComfyDialog, $el } from "../../scripts/ui.js"; +import { + ShareDialog, + SUPPORTED_OUTPUT_NODE_TYPES, + getPotentialOutputsAndOutputNodes, + ShareDialogChooser, + showOpenArtShareDialog, + showShareDialog, + showYouMLShareDialog +} from "./comfyui-share-common.js"; +import { OpenArtShareDialog } from "./comfyui-share-openart.js"; +import { CustomNodesInstaller } from "./custom-nodes-downloader.js"; +import { AlternativesInstaller } from "./a1111-alter-downloader.js"; +import { SnapshotManager } from "./snapshot.js"; +import { ModelInstaller } from "./model-downloader.js"; +import { manager_instance, setManagerInstance, install_via_git_url, install_pip, rebootAPI, free_models } from "./common.js"; +import { ComponentBuilderDialog, load_components, set_component_policy, getPureName } from "./components-manager.js"; +import { set_double_click_policy } from "./node_fixer.js"; + +var docStyle = document.createElement('style'); +docStyle.innerHTML = ` +#cm-manager-dialog { + width: 1000px; + height: 520px; + box-sizing: content-box; + z-index: 10000; +} + +.cb-widget { + width: 400px; + height: 25px; + box-sizing: border-box; + z-index: 10000; + margin-top: 10px; + margin-bottom: 5px; +} + +.cb-widget-input { + width: 305px; + height: 25px; + box-sizing: border-box; +} +.cb-widget-input:disabled { + background-color: #444444; + color: white; +} + +.cb-widget-input-label { + width: 90px; + height: 25px; + box-sizing: border-box; + color: white; + text-align: right; + display: inline-block; + margin-right: 5px; +} + +.cm-menu-container { + column-gap: 20px; + display: flex; + flex-wrap: wrap; + justify-content: center; + box-sizing: content-box; +} + +.cm-menu-column { + display: flex; + flex-direction: column; + flex: 1 1 auto; + width: 300px; + box-sizing: content-box; +} + +.cm-title { + background-color: black; + text-align: center; + height: 40px; + width: calc(100% - 10px); + font-weight: bold; + justify-content: center; + align-content: center; + vertical-align: middle; +} + +#cm-channel-badge { + color: white; + background-color: #AA0000; + width: 220px; + height: 23px; + font-size: 13px; + border-radius: 5px; + left: 5px; + top: 5px; + align-content: center; + justify-content: center; + text-align: center; + font-weight: bold; + float: left; + vertical-align: middle; + position: relative; +} + +#custom-nodes-grid a { + color: #5555FF; + font-weight: bold; + text-decoration: none; +} + +#custom-nodes-grid a:hover { + color: #7777FF; + text-decoration: underline; +} + +#external-models-grid a { + color: #5555FF; + font-weight: bold; + text-decoration: none; +} + +#external-models-grid a:hover { + color: #7777FF; + text-decoration: underline; +} + +#alternatives-grid a { + color: 
#5555FF; + font-weight: bold; + text-decoration: none; +} + +#alternatives-grid a:hover { + color: #7777FF; + text-decoration: underline; +} + +.cm-notice-board { + width: 290px; + height: 270px; + overflow: auto; + color: var(--input-text); + border: 1px solid var(--descrip-text); + padding: 5px 10px; + overflow-x: hidden; + box-sizing: content-box; +} + +.cm-notice-board > ul { + display: block; + list-style-type: disc; + margin-block-start: 1em; + margin-block-end: 1em; + margin-inline-start: 0px; + margin-inline-end: 0px; + padding-inline-start: 40px; +} + +.cm-conflicted-nodes-text { + background-color: #CCCC55 !important; + color: #AA3333 !important; + font-size: 10px; + border-radius: 5px; + padding: 10px; +} + +.cm-warn-note { + background-color: #101010 !important; + color: #FF3800 !important; + font-size: 13px; + border-radius: 5px; + padding: 10px; + overflow-x: hidden; + overflow: auto; +} + +.cm-info-note { + background-color: #101010 !important; + color: #FF3800 !important; + font-size: 13px; + border-radius: 5px; + padding: 10px; + overflow-x: hidden; + overflow: auto; +} +`; + +document.head.appendChild(docStyle); + +var update_comfyui_button = null; +var fetch_updates_button = null; +var update_all_button = null; +var badge_mode = "none"; +let share_option = 'all'; + +// copied style from https://github.com/pythongosssss/ComfyUI-Custom-Scripts +const style = ` +#workflowgallery-button { + width: 310px; + height: 27px; + padding: 0px !important; + position: relative; + overflow: hidden; + font-size: 17px !important; +} +#cm-nodeinfo-button { + width: 310px; + height: 27px; + padding: 0px !important; + position: relative; + overflow: hidden; + font-size: 17px !important; +} +#cm-manual-button { + width: 310px; + height: 27px; + position: relative; + overflow: hidden; +} + +.cm-button { + width: 310px; + height: 30px; + position: relative; + overflow: hidden; + font-size: 17px !important; +} + +.cm-experimental-button { + width: 290px; + height: 30px; + position: relative; + overflow: hidden; + font-size: 17px !important; +} + +.cm-experimental { + width: 310px; + border: 1px solid #555; + border-radius: 5px; + padding: 10px; + align-items: center; + text-align: center; + justify-content: center; + box-sizing: border-box; +} + +.cm-experimental-legend { + margin-top: -20px; + margin-left: 50%; + width:auto; + height:20px; + font-size: 13px; + font-weight: bold; + background-color: #990000; + color: #CCFFFF; + border-radius: 5px; + text-align: center; + transform: translateX(-50%); + display: block; +} + +.cm-menu-combo { + cursor: pointer; + width: 310px; + box-sizing: border-box; +} + +.cm-small-button { + width: 120px; + height: 30px; + position: relative; + overflow: hidden; + box-sizing: border-box; + font-size: 17px !important; +} + +#cm-install-customnodes-button { + width: 200px; + height: 30px; + position: relative; + overflow: hidden; + box-sizing: border-box; + font-size: 17px !important; +} + +.cm-search-filter { + width: 200px; + height: 30px !important; + position: relative; + overflow: hidden; + box-sizing: border-box; +} + +.cb-node-label { + width: 400px; + height:28px; + color: black; + background-color: #777777; + font-size: 18px; + text-align: center; + font-weight: bold; +} + +#cm-close-button { + width: calc(100% - 65px); + bottom: 10px; + position: absolute; + overflow: hidden; +} + +#cm-save-button { + width: calc(100% - 65px); + bottom:40px; + position: absolute; + overflow: hidden; +} +#cm-save-button:disabled { + background-color: #444444; +} + 
+.pysssss-workflow-arrow-2 { + position: absolute; + top: 0; + bottom: 0; + right: 0; + font-size: 12px; + display: flex; + align-items: center; + width: 24px; + justify-content: center; + background: rgba(255,255,255,0.1); + content: "▼"; +} +.pysssss-workflow-arrow-2:after { + content: "▼"; + } + .pysssss-workflow-arrow-2:hover { + filter: brightness(1.6); + background-color: var(--comfy-menu-bg); + } +.pysssss-workflow-popup-2 ~ .litecontextmenu { + transform: scale(1.3); +} +#workflowgallery-button-menu { + z-index: 10000000000 !important; +} +#cm-manual-button-menu { + z-index: 10000000000 !important; +} +`; + + + +async function init_badge_mode() { + api.fetchApi('/manager/badge_mode') + .then(response => response.text()) + .then(data => { badge_mode = data; }) +} + +async function init_share_option() { + api.fetchApi('/manager/share_option') + .then(response => response.text()) + .then(data => { + share_option = data || 'all'; + }); +} + +async function init_notice(notice) { + api.fetchApi('/manager/notice') + .then(response => response.text()) + .then(data => { + notice.innerHTML = data; + }) +} + +await init_badge_mode(); +await init_share_option(); + +async function fetchNicknames() { + const response1 = await api.fetchApi(`/customnode/getmappings?mode=local`); + const mappings = await response1.json(); + + let result = {}; + let nickname_patterns = []; + + for (let i in mappings) { + let item = mappings[i]; + var nickname; + if (item[1].nickname) { + nickname = item[1].nickname; + } + else if (item[1].title) { + nickname = item[1].title; + } + else { + nickname = item[1].title_aux; + } + + for (let j in item[0]) { + result[item[0][j]] = nickname; + } + + if(item[1].nodename_pattern) { + nickname_patterns.push([item[1].nodename_pattern, nickname]); + } + } + + return [result, nickname_patterns]; +} + +const [nicknames, nickname_patterns] = await fetchNicknames(); + +function getNickname(node, nodename) { + if(node.nickname) { + return node.nickname; + } + else { + if (nicknames[nodename]) { + node.nickname = nicknames[nodename]; + } + else if(node.getInnerNodes) { + let pure_name = getPureName(node); + let groupNode = app.graph.extra?.groupNodes?.[pure_name]; + if(groupNode) { + let packname = groupNode.packname; + node.nickname = packname; + } + return node.nickname; + } + else { + for(let i in nickname_patterns) { + let item = nickname_patterns[i]; + if(nodename.match(item[0])) { + node.nickname = item[1]; + } + } + } + + return node.nickname; + } +} + +function drawBadge(node, orig, restArgs) { + let ctx = restArgs[0]; + const r = orig?.apply?.(node, restArgs); + + if (!node.flags.collapsed && badge_mode != 'none' && node.constructor.title_mode != LiteGraph.NO_TITLE) { + let text = ""; + if (badge_mode.startsWith('id_nick')) + text = `#${node.id} `; + + let nick = node.getNickname(); + if (nick) { + if (nick == 'ComfyUI') { + if(badge_mode.endsWith('hide')) { + nick = ""; + } + else { + nick = "🦊" + } + } + + if (nick.length > 25) { + text += nick.substring(0, 23) + ".."; + } + else { + text += nick; + } + } + + if (text != "") { + let fgColor = "white"; + let bgColor = "#0F1F0F"; + let visible = true; + + ctx.save(); + ctx.font = "12px sans-serif"; + const sz = ctx.measureText(text); + ctx.fillStyle = bgColor; + ctx.beginPath(); + ctx.roundRect(node.size[0] - sz.width - 12, -LiteGraph.NODE_TITLE_HEIGHT - 20, sz.width + 12, 20, 5); + ctx.fill(); + + ctx.fillStyle = fgColor; + ctx.fillText(text, node.size[0] - sz.width - 6, -LiteGraph.NODE_TITLE_HEIGHT - 6); + ctx.restore(); + 
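// when a node is flagged with has_errors, additionally overlay its type name at the center of the node body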
+ if (node.has_errors) { + ctx.save(); + ctx.font = "bold 14px sans-serif"; + const sz2 = ctx.measureText(node.type); + ctx.fillStyle = 'white'; + ctx.fillText(node.type, node.size[0] / 2 - sz2.width / 2, node.size[1] / 2); + ctx.restore(); + } + } + } + return r; +} + + +async function updateComfyUI() { + let prev_text = update_comfyui_button.innerText; + update_comfyui_button.innerText = "Updating ComfyUI..."; + update_comfyui_button.disabled = true; + update_comfyui_button.style.backgroundColor = "gray"; + + try { + const response = await api.fetchApi('/comfyui_manager/update_comfyui'); + + if (response.status == 400) { + app.ui.dialog.show('Failed to update ComfyUI.'); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + + if (response.status == 201) { + app.ui.dialog.show('ComfyUI has been successfully updated.'); + app.ui.dialog.element.style.zIndex = 10010; + } + else { + app.ui.dialog.show('ComfyUI is already up to date with the latest version.'); + app.ui.dialog.element.style.zIndex = 10010; + } + + return true; + } + catch (exception) { + app.ui.dialog.show(`Failed to update ComfyUI / ${exception}`); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + finally { + update_comfyui_button.disabled = false; + update_comfyui_button.innerText = prev_text; + update_comfyui_button.style.backgroundColor = ""; + } +} + +async function fetchUpdates(update_check_checkbox) { + let prev_text = fetch_updates_button.innerText; + fetch_updates_button.innerText = "Fetching updates..."; + fetch_updates_button.disabled = true; + fetch_updates_button.style.backgroundColor = "gray"; + + try { + var mode = manager_instance.datasrc_combo.value; + + const response = await api.fetchApi(`/customnode/fetch_updates?mode=${mode}`); + + if (response.status != 200 && response.status != 201) { + app.ui.dialog.show('Failed to fetch updates.'); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + + if (response.status == 201) { + app.ui.dialog.show("There is an updated extension available.
<BR><BR>
NOTE:<BR>Fetch Updates is not an update.<BR>Please update from <BUTTON id='cm-install-customnodes-button'>Install Custom Nodes</BUTTON><BR><BR>
"); + + const button = document.getElementById('cm-install-customnodes-button'); + button.addEventListener("click", + async function() { + app.ui.dialog.close(); + + if(!CustomNodesInstaller.instance) + CustomNodesInstaller.instance = new CustomNodesInstaller(app, self); + + await CustomNodesInstaller.instance.show(CustomNodesInstaller.ShowMode.UPDATE); + } + ); + + app.ui.dialog.element.style.zIndex = 10010; + update_check_checkbox.checked = false; + } + else { + app.ui.dialog.show('All extensions are already up-to-date with the latest versions.'); + app.ui.dialog.element.style.zIndex = 10010; + } + + return true; + } + catch (exception) { + app.ui.dialog.show(`Failed to update custom nodes / ${exception}`); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + finally { + fetch_updates_button.disabled = false; + fetch_updates_button.innerText = prev_text; + fetch_updates_button.style.backgroundColor = ""; + } +} + +async function updateAll(update_check_checkbox, manager_dialog) { + let prev_text = update_all_button.innerText; + update_all_button.innerText = "Updating all...(ComfyUI)"; + update_all_button.disabled = true; + update_all_button.style.backgroundColor = "gray"; + + try { + var mode = manager_instance.datasrc_combo.value; + + update_all_button.innerText = "Updating all..."; + const response1 = await api.fetchApi('/comfyui_manager/update_comfyui'); + const response2 = await api.fetchApi(`/customnode/update_all?mode=${mode}`); + + if (response1.status != 200 && response2.status != 201) { + app.ui.dialog.show('Failed to update ComfyUI or several extensions.
<BR><BR>
See terminal log.<BR>
'); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + if(response1.status == 201 || response2.status == 201) { + const update_info = await response2.json(); + + let failed_list = ""; + if(update_info.failed.length > 0) { + failed_list = "
<BR>FAILED: "+update_info.failed.join(", "); + } + + let updated_list = ""; + if(update_info.updated.length > 0) { + updated_list = "<BR>
UPDATED: "+update_info.updated.join(", "); + } + + app.ui.dialog.show( + "ComfyUI and all extensions have been updated to the latest version.
<BR>To apply the updated custom node, please <BUTTON id='cm-reboot-button'>RESTART</BUTTON> ComfyUI. And refresh browser.<BR>
" + +failed_list + +updated_list + ); + + const rebootButton = document.getElementById('cm-reboot-button'); + rebootButton.addEventListener("click", + function() { + if(rebootAPI()) { + manager_dialog.close(); + } + }); + + app.ui.dialog.element.style.zIndex = 10010; + } + else { + app.ui.dialog.show('ComfyUI and all extensions are already up-to-date with the latest versions.'); + app.ui.dialog.element.style.zIndex = 10010; + } + + return true; + } + catch (exception) { + app.ui.dialog.show(`Failed to update ComfyUI or several extensions / ${exception}`); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + finally { + update_all_button.disabled = false; + update_all_button.innerText = prev_text; + update_all_button.style.backgroundColor = ""; + } +} + +function newDOMTokenList(initialTokens) { + const tmp = document.createElement(`div`); + + const classList = tmp.classList; + if (initialTokens) { + initialTokens.forEach(token => { + classList.add(token); + }); + } + + return classList; + } + +/** + * Check whether the node is a potential output node (img, gif or video output) + */ +const isOutputNode = (node) => { + return SUPPORTED_OUTPUT_NODE_TYPES.includes(node.type); +} + +// ----------- +class ManagerMenuDialog extends ComfyDialog { + createControlsMid() { + let self = this; + + update_comfyui_button = + $el("button.cm-button", { + type: "button", + textContent: "Update ComfyUI", + onclick: + () => updateComfyUI() + }); + + fetch_updates_button = + $el("button.cm-button", { + type: "button", + textContent: "Fetch Updates", + onclick: + () => fetchUpdates(this.update_check_checkbox) + }); + + update_all_button = + $el("button.cm-button", { + type: "button", + textContent: "Update All", + onclick: + () => updateAll(this.update_check_checkbox, self) + }); + + const res = + [ + $el("button.cm-button", { + type: "button", + textContent: "Install Custom Nodes", + onclick: + () => { + if(!CustomNodesInstaller.instance) + CustomNodesInstaller.instance = new CustomNodesInstaller(app, self); + CustomNodesInstaller.instance.show(CustomNodesInstaller.ShowMode.NORMAL); + } + }), + + $el("button.cm-button", { + type: "button", + textContent: "Install Missing Custom Nodes", + onclick: + () => { + if(!CustomNodesInstaller.instance) + CustomNodesInstaller.instance = new CustomNodesInstaller(app, self); + CustomNodesInstaller.instance.show(CustomNodesInstaller.ShowMode.MISSING_NODES); + } + }), + + $el("button.cm-button", { + type: "button", + textContent: "Install Models", + onclick: + () => { + if(!ModelInstaller.instance) + ModelInstaller.instance = new ModelInstaller(app, self); + ModelInstaller.instance.show(); + } + }), + + $el("button.cm-button", { + type: "button", + textContent: "Install via Git URL", + onclick: () => { + var url = prompt("Please enter the URL of the Git repository to install", ""); + + if (url !== null) { + install_via_git_url(url, self); + } + } + }), + + $el("br", {}, []), + update_all_button, + update_comfyui_button, + fetch_updates_button, + + $el("br", {}, []), + $el("button.cm-button", { + type: "button", + textContent: "Alternatives of A1111", + onclick: + () => { + if(!AlternativesInstaller.instance) + AlternativesInstaller.instance = new AlternativesInstaller(app, self); + AlternativesInstaller.instance.show(); + } + }) + ]; + + return res; + } + + createControlsLeft() { + let self = this; + + this.update_check_checkbox = $el("input",{type:'checkbox', id:"skip_update_check"},[]) + const uc_checkbox_text = $el("label",{for:"skip_update_check"},[" Skip 
update check"]) + uc_checkbox_text.style.color = "var(--fg-color)"; + uc_checkbox_text.style.cursor = "pointer"; + this.update_check_checkbox.checked = true; + + // db mode + this.datasrc_combo = document.createElement("select"); + this.datasrc_combo.setAttribute("title", "Configure where to retrieve node/model information. If set to 'local,' the channel is ignored, and if set to 'channel (remote),' it fetches the latest information each time the list is opened."); + this.datasrc_combo.className = "cm-menu-combo"; + this.datasrc_combo.appendChild($el('option', { value: 'cache', text: 'DB: Channel (1day cache)' }, [])); + this.datasrc_combo.appendChild($el('option', { value: 'local', text: 'DB: Local' }, [])); + this.datasrc_combo.appendChild($el('option', { value: 'url', text: 'DB: Channel (remote)' }, [])); + + // preview method + let preview_combo = document.createElement("select"); + preview_combo.setAttribute("title", "Configure how latent variables will be decoded during preview in the sampling process."); + preview_combo.className = "cm-menu-combo"; + preview_combo.appendChild($el('option', { value: 'auto', text: 'Preview method: Auto' }, [])); + preview_combo.appendChild($el('option', { value: 'taesd', text: 'Preview method: TAESD (slow)' }, [])); + preview_combo.appendChild($el('option', { value: 'latent2rgb', text: 'Preview method: Latent2RGB (fast)' }, [])); + preview_combo.appendChild($el('option', { value: 'none', text: 'Preview method: None (very fast)' }, [])); + + api.fetchApi('/manager/preview_method') + .then(response => response.text()) + .then(data => { preview_combo.value = data; }); + + preview_combo.addEventListener('change', function (event) { + api.fetchApi(`/manager/preview_method?value=${event.target.value}`); + }); + + // nickname + let badge_combo = document.createElement("select"); + badge_combo.setAttribute("title", "Configure the content to be displayed on the badge at the top right corner of the node. The ID is the identifier of the node. If 'hide built-in' is selected, both unknown nodes and built-in nodes will be omitted, making them indistinguishable"); + badge_combo.className = "cm-menu-combo"; + badge_combo.appendChild($el('option', { value: 'none', text: 'Badge: None' }, [])); + badge_combo.appendChild($el('option', { value: 'nick', text: 'Badge: Nickname' }, [])); + badge_combo.appendChild($el('option', { value: 'nick_hide', text: 'Badge: Nickname (hide built-in)' }, [])); + badge_combo.appendChild($el('option', { value: 'id_nick', text: 'Badge: #ID Nickname' }, [])); + badge_combo.appendChild($el('option', { value: 'id_nick_hide', text: 'Badge: #ID Nickname (hide built-in)' }, [])); + + api.fetchApi('/manager/badge_mode') + .then(response => response.text()) + .then(data => { badge_combo.value = data; badge_mode = data; }); + + badge_combo.addEventListener('change', function (event) { + api.fetchApi(`/manager/badge_mode?value=${event.target.value}`); + badge_mode = event.target.value; + app.graph.setDirtyCanvas(true); + }); + + // channel + let channel_combo = document.createElement("select"); + channel_combo.setAttribute("title", "Configure the channel for retrieving data from the Custom Node list (including missing nodes) or the Model list. 
Note that the badge utilizes local information."); + channel_combo.className = "cm-menu-combo"; + api.fetchApi('/manager/channel_url_list') + .then(response => response.json()) + .then(async data => { + try { + let urls = data.list; + for (let i in urls) { + if (urls[i] != '') { + let name_url = urls[i].split('::'); + channel_combo.appendChild($el('option', { value: name_url[0], text: `Channel: ${name_url[0]}` }, [])); + } + } + + channel_combo.addEventListener('change', function (event) { + api.fetchApi(`/manager/channel_url_list?value=${event.target.value}`); + }); + + channel_combo.value = data.selected; + } + catch (exception) { + + } + }); + + // default ui state + let default_ui_combo = document.createElement("select"); + default_ui_combo.setAttribute("title", "Set the default state to be displayed in the main menu when the browser starts."); + default_ui_combo.className = "cm-menu-combo"; + default_ui_combo.appendChild($el('option', { value: 'none', text: 'Default UI: None' }, [])); + default_ui_combo.appendChild($el('option', { value: 'history', text: 'Default UI: History' }, [])); + default_ui_combo.appendChild($el('option', { value: 'queue', text: 'Default UI: Queue' }, [])); + api.fetchApi('/manager/default_ui') + .then(response => response.text()) + .then(data => { default_ui_combo.value = data; }); + + default_ui_combo.addEventListener('change', function (event) { + api.fetchApi(`/manager/default_ui?value=${event.target.value}`); + }); + + + // share + let share_combo = document.createElement("select"); + share_combo.setAttribute("title", "Hide the share button in the main menu or set the default action upon clicking it. Additionally, configure the default share site when sharing via the context menu's share button."); + share_combo.className = "cm-menu-combo"; + const share_options = [ + ['none', 'None'], + ['openart', 'OpenArt AI'], + ['youml', 'YouML'], + ['matrix', 'Matrix Server'], + ['comfyworkflows', 'ComfyWorkflows'], + ['all', 'All'], + ]; + for (const option of share_options) { + share_combo.appendChild($el('option', { value: option[0], text: `Share: ${option[1]}` }, [])); + } + + // default ui state + let component_policy_combo = document.createElement("select"); + component_policy_combo.setAttribute("title", "When loading the workflow, configure which version of the component to use."); + component_policy_combo.className = "cm-menu-combo"; + component_policy_combo.appendChild($el('option', { value: 'workflow', text: 'Component: Use workflow version' }, [])); + component_policy_combo.appendChild($el('option', { value: 'higher', text: 'Component: Use higher version' }, [])); + component_policy_combo.appendChild($el('option', { value: 'mine', text: 'Component: Use my version' }, [])); + api.fetchApi('/manager/component/policy') + .then(response => response.text()) + .then(data => { + component_policy_combo.value = data; + set_component_policy(data); + }); + + component_policy_combo.addEventListener('change', function (event) { + api.fetchApi(`/manager/component/policy?value=${event.target.value}`); + set_component_policy(event.target.value); + }); + + let dbl_click_policy_combo = document.createElement("select"); + dbl_click_policy_combo.setAttribute("title", "When loading the workflow, configure which version of the component to use."); + dbl_click_policy_combo.className = "cm-menu-combo"; + dbl_click_policy_combo.appendChild($el('option', { value: 'none', text: 'Double-Click: None' }, [])); + dbl_click_policy_combo.appendChild($el('option', { value: 'copy-all', 
text: 'Double-Click: Copy All Connections' }, [])); + dbl_click_policy_combo.appendChild($el('option', { value: 'copy-input', text: 'Double-Click: Copy Input Connections' }, [])); + dbl_click_policy_combo.appendChild($el('option', { value: 'possible-input', text: 'Double-Click: Possible Input Connections' }, [])); + dbl_click_policy_combo.appendChild($el('option', { value: 'dual', text: 'Double-Click: Possible(left) + Copy(right)' }, [])); + + api.fetchApi('/manager/dbl_click/policy') + .then(response => response.text()) + .then(data => { + dbl_click_policy_combo.value = data; + set_double_click_policy(data); + }); + + dbl_click_policy_combo.addEventListener('change', function (event) { + api.fetchApi(`/manager/dbl_click/policy?value=${event.target.value}`); + set_double_click_policy(event.target.value); + }); + + api.fetchApi('/manager/share_option') + .then(response => response.text()) + .then(data => { + share_combo.value = data || 'all'; + share_option = data || 'all'; + }); + + share_combo.addEventListener('change', function (event) { + const value = event.target.value; + share_option = value; + api.fetchApi(`/manager/share_option?value=${value}`); + const shareButton = document.getElementById("shareButton"); + if (value === 'none') { + shareButton.style.display = "none"; + } else { + shareButton.style.display = "inline-block"; + } + }); + + return [ + $el("div", {}, [this.update_check_checkbox, uc_checkbox_text]), + $el("br", {}, []), + this.datasrc_combo, + channel_combo, + preview_combo, + badge_combo, + default_ui_combo, + share_combo, + component_policy_combo, + dbl_click_policy_combo, + $el("br", {}, []), + + $el("br", {}, []), + $el("filedset.cm-experimental", {}, [ + $el("legend.cm-experimental-legend", {}, ["EXPERIMENTAL"]), + $el("button.cm-experimental-button", { + type: "button", + textContent: "Snapshot Manager", + onclick: + () => { + if(!SnapshotManager.instance) + SnapshotManager.instance = new SnapshotManager(app, self); + SnapshotManager.instance.show(); + } + }), + $el("button.cm-experimental-button", { + type: "button", + textContent: "Install PIP packages", + onclick: + () => { + var url = prompt("Please enumerate the pip packages to be installed.\n\nExample: insightface opencv-python-headless>=4.1.1\n", ""); + + if (url !== null) { + install_pip(url, self); + } + } + }), + $el("button.cm-experimental-button", { + type: "button", + textContent: "Unload models", + onclick: () => { free_models(); } + }) + ]), + ]; + } + + createControlsRight() { + const elts = [ + $el("button.cm-button", { + id: 'cm-manual-button', + type: "button", + textContent: "Community Manual", + onclick: () => { window.open("https://blenderneko.github.io/ComfyUI-docs/", "comfyui-community-manual"); } + }, [ + $el("div.pysssss-workflow-arrow-2", { + id: `cm-manual-button-arrow`, + onclick: (e) => { + e.preventDefault(); + e.stopPropagation(); + + LiteGraph.closeAllContextMenus(); + const menu = new LiteGraph.ContextMenu( + [ + { + title: "Comfy Custom Node How To", + callback: () => { window.open("https://github.com/chrisgoringe/Comfy-Custom-Node-How-To/wiki/aaa_index", "comfyui-community-manual1"); }, + }, + { + title: "ComfyUI Guide To Making Custom Nodes", + callback: () => { window.open("https://github.com/Suzie1/ComfyUI_Guide_To_Making_Custom_Nodes/wiki", "comfyui-community-manual2"); }, + }, + { + title: "ComfyUI Examples", + callback: () => { window.open("https://comfyanonymous.github.io/ComfyUI_examples", "comfyui-community-manual3"); }, + }, + { + title: "Close", + callback: () => { + 
LiteGraph.closeAllContextMenus(); + }, + } + ], + { + event: e, + scale: 1.3, + }, + window + ); + // set the id so that we can override the context menu's z-index to be above the comfyui manager menu + menu.root.id = "cm-manual-button-menu"; + menu.root.classList.add("pysssss-workflow-popup-2"); + }, + }) + ]), + + $el("button", { + id: 'workflowgallery-button', + type: "button", + style: { + ...(localStorage.getItem("wg_last_visited") ? {height: '50px'} : {}) + }, + onclick: (e) => { + const last_visited_site = localStorage.getItem("wg_last_visited") + if (!!last_visited_site) { + window.open(last_visited_site, last_visited_site); + } else { + this.handleWorkflowGalleryButtonClick(e) + } + }, + }, [ + $el("p", { + textContent: 'Workflow Gallery', + style: { + 'text-align': 'center', + 'color': 'white', + 'font-size': '18px', + 'margin': 0, + 'padding': 0, + } + }, [ + $el("p", { + id: 'workflowgallery-button-last-visited-label', + textContent: `(${localStorage.getItem("wg_last_visited") ? localStorage.getItem("wg_last_visited").split('/')[2] : ''})`, + style: { + 'text-align': 'center', + 'color': 'white', + 'font-size': '12px', + 'margin': 0, + 'padding': 0, + } + }) + ]), + $el("div.pysssss-workflow-arrow-2", { + id: `comfyworkflows-button-arrow`, + onclick: this.handleWorkflowGalleryButtonClick + }) + ]), + + $el("button.cm-button", { + id: 'cm-nodeinfo-button', + type: "button", + textContent: "Nodes Info", + onclick: () => { window.open("https://ltdrdata.github.io/", "comfyui-node-info"); } + }), + $el("br", {}, []), + ]; + + var textarea = document.createElement("div"); + textarea.className = "cm-notice-board"; + elts.push(textarea); + + init_notice(textarea); + + return elts; + } + + constructor() { + super(); + + const close_button = $el("button", { id: "cm-close-button", type: "button", textContent: "Close", onclick: () => this.close() }); + + const content = + $el("div.comfy-modal-content", + [ + $el("tr.cm-title", {}, [ + $el("font", {size:6, color:"white"}, [`ComfyUI Manager Menu`])] + ), + $el("br", {}, []), + $el("div.cm-menu-container", + [ + $el("div.cm-menu-column", [...this.createControlsLeft()]), + $el("div.cm-menu-column", [...this.createControlsMid()]), + $el("div.cm-menu-column", [...this.createControlsRight()]) + ]), + + $el("br", {}, []), + close_button, + ] + ); + + content.style.width = '100%'; + content.style.height = '100%'; + + this.element = $el("div.comfy-modal", { id:'cm-manager-dialog', parent: document.body }, [ content ]); + } + + show() { + this.element.style.display = "block"; + } + + handleWorkflowGalleryButtonClick(e) { + e.preventDefault(); + e.stopPropagation(); + LiteGraph.closeAllContextMenus(); + + // Modify the style of the button so that the UI can indicate the last + // visited site right away. 
+ const modifyButtonStyle = (url) => { + const workflowGalleryButton = document.getElementById('workflowgallery-button'); + workflowGalleryButton.style.height = '50px'; + const lastVisitedLabel = document.getElementById('workflowgallery-button-last-visited-label'); + lastVisitedLabel.textContent = `(${url.split('/')[2]})`; + } + + const menu = new LiteGraph.ContextMenu( + [ + { + title: "Share your art", + callback: () => { + if (share_option === 'openart') { + showOpenArtShareDialog(); + return; + } else if (share_option === 'matrix' || share_option === 'comfyworkflows') { + showShareDialog(share_option); + return; + } else if (share_option === 'youml') { + showYouMLShareDialog(); + return; + } + + if (!ShareDialogChooser.instance) { + ShareDialogChooser.instance = new ShareDialogChooser(); + } + ShareDialogChooser.instance.show(); + }, + }, + { + title: "Open 'openart.ai'", + callback: () => { + const url = "https://openart.ai/workflows/dev"; + localStorage.setItem("wg_last_visited", url); + window.open(url, url); + modifyButtonStyle(url); + }, + }, + { + title: "Open 'youml.com'", + callback: () => { + const url = "https://youml.com/?from=comfyui-share"; + localStorage.setItem("wg_last_visited", url); + window.open(url, url); + modifyButtonStyle(url); + }, + }, + { + title: "Open 'comfyworkflows.com'", + callback: () => { + const url = "https://comfyworkflows.com/"; + localStorage.setItem("wg_last_visited", url); + window.open(url, url); + modifyButtonStyle(url); + }, + }, + { + title: "Open 'flowt.ai'", + callback: () => { + const url = "https://flowt.ai/"; + localStorage.setItem("wg_last_visited", url); + window.open(url, url); + modifyButtonStyle(url); + }, + }, + { + title: "Close", + callback: () => { + LiteGraph.closeAllContextMenus(); + }, + } + ], + { + event: e, + scale: 1.3, + }, + window + ); + // set the id so that we can override the context menu's z-index to be above the comfyui manager menu + menu.root.id = "workflowgallery-button-menu"; + menu.root.classList.add("pysssss-workflow-popup-2"); + } +} + + +app.registerExtension({ + name: "Comfy.ManagerMenu", + init() { + $el("style", { + textContent: style, + parent: document.head, + }); + }, + async setup() { + let orig_clear = app.graph.clear; + app.graph.clear = function () { + orig_clear.call(app.graph); + load_components(); + }; + + load_components(); + + const menu = document.querySelector(".comfy-menu"); + const separator = document.createElement("hr"); + + separator.style.margin = "20px 0"; + separator.style.width = "100%"; + menu.append(separator); + + const managerButton = document.createElement("button"); + managerButton.textContent = "Manager"; + managerButton.onclick = () => { + if(!manager_instance) + setManagerInstance(new ManagerMenuDialog()); + manager_instance.show(); + } + menu.append(managerButton); + + + const shareButton = document.createElement("button"); + shareButton.id = "shareButton"; + shareButton.textContent = "Share"; + shareButton.onclick = () => { + if (share_option === 'openart') { + showOpenArtShareDialog(); + return; + } else if (share_option === 'matrix' || share_option === 'comfyworkflows') { + showShareDialog(share_option); + return; + } else if (share_option === 'youml') { + showYouMLShareDialog(); + return; + } + + if(!ShareDialogChooser.instance) { + ShareDialogChooser.instance = new ShareDialogChooser(); + } + ShareDialogChooser.instance.show(); + } + // make the background color a gradient of blue to green + shareButton.style.background = "linear-gradient(90deg, #00C9FF 0%, #92FE9D 
100%)"; + shareButton.style.color = "black"; + + // Load share option from local storage to determine whether to show + // the share button. + const shouldShowShareButton = share_option !== 'none'; + shareButton.style.display = shouldShowShareButton ? "inline-block" : "none"; + + menu.append(shareButton); + }, + + async beforeRegisterNodeDef(nodeType, nodeData, app) { + this._addExtraNodeContextMenu(nodeType, app); + }, + + async nodeCreated(node, app) { + if(!node.badge_enabled) { + node.getNickname = function () { return getNickname(node, node.comfyClass.trim()) }; + let orig = node.onDrawForeground; + if(!orig) + orig = node.__proto__.onDrawForeground; + + node.onDrawForeground = function (ctx) { + drawBadge(node, orig, arguments) + }; + node.badge_enabled = true; + } + }, + + async loadedGraphNode(node, app) { + if(!node.badge_enabled) { + const orig = node.onDrawForeground; + node.getNickname = function () { return getNickname(node, node.type.trim()) }; + node.onDrawForeground = function (ctx) { drawBadge(node, orig, arguments) }; + } + }, + + _addExtraNodeContextMenu(node, app) { + const origGetExtraMenuOptions = node.prototype.getExtraMenuOptions; + node.prototype.cm_menu_added = true; + node.prototype.getExtraMenuOptions = function (_, options) { + origGetExtraMenuOptions?.apply?.(this, arguments); + + if (node.category.startsWith('group nodes/')) { + options.push({ + content: "Save As Component", + callback: (obj) => { + if (!ComponentBuilderDialog.instance) { + ComponentBuilderDialog.instance = new ComponentBuilderDialog(); + } + ComponentBuilderDialog.instance.target_node = node; + ComponentBuilderDialog.instance.show(); + } + }, null); + } + + if (isOutputNode(node)) { + const { potential_outputs } = getPotentialOutputsAndOutputNodes([this]); + const hasOutput = potential_outputs.length > 0; + + // Check if the previous menu option is `null`. If it's not, + // then we need to add a `null` as a separator. 
+ if (options[options.length - 1] !== null) { + options.push(null); + } + + options.push({ + content: "🏞️ Share Output", + disabled: !hasOutput, + callback: (obj) => { + if (!ShareDialog.instance) { + ShareDialog.instance = new ShareDialog(); + } + const shareButton = document.getElementById("shareButton"); + if (shareButton) { + const currentNode = this; + if (!OpenArtShareDialog.instance) { + OpenArtShareDialog.instance = new OpenArtShareDialog(); + } + OpenArtShareDialog.instance.selectedNodeId = currentNode.id; + if (!ShareDialog.instance) { + ShareDialog.instance = new ShareDialog(share_option); + } + ShareDialog.instance.selectedNodeId = currentNode.id; + shareButton.click(); + } + } + }, null); + } + } + }, +}); + + +async function set_default_ui() +{ + let res = await api.fetchApi('/manager/default_ui'); + if(res.status == 200) { + let mode = await res.text(); + switch(mode) { + case 'history': + app.ui.queue.hide(); + app.ui.history.show(); + break; + case 'queue': + app.ui.queue.show(); + app.ui.history.hide(); + break; + default: + // do nothing + break; + } + } +} + +set_default_ui(); \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/js/comfyui-share-common.js b/custom_nodes/ComfyUI-Manager/js/comfyui-share-common.js new file mode 100644 index 0000000000000000000000000000000000000000..74ab6af6fc453175aa7708d8f5a6baf18fbecb13 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/comfyui-share-common.js @@ -0,0 +1,1005 @@ +import { app } from "../../scripts/app.js"; +import { api } from "../../scripts/api.js"; +import { ComfyDialog, $el } from "../../scripts/ui.js"; +import { OpenArtShareDialog } from "./comfyui-share-openart.js"; +import { YouMLShareDialog } from "./comfyui-share-youml.js"; + +export const SUPPORTED_OUTPUT_NODE_TYPES = [ + "PreviewImage", + "SaveImage", + "VHS_VideoCombine", + "ADE_AnimateDiffCombine", + "SaveAnimatedWEBP", + "CR Image Output" +] + +var docStyle = document.createElement('style'); +docStyle.innerHTML = ` +.cm-menu-container { + column-gap: 20px; + display: flex; + flex-wrap: wrap; + justify-content: center; +} + +.cm-menu-column { + display: flex; + flex-direction: column; +} + +.cm-title { + padding: 10px 10px 0 10p; + background-color: black; + text-align: center; + height: 45px; +} +`; +document.head.appendChild(docStyle); + +export function getPotentialOutputsAndOutputNodes(nodes) { + const potential_outputs = []; + const potential_output_nodes = []; + + // iterate over the array of nodes to find the ones that are marked as SaveImage + // TODO: Add support for AnimateDiffCombine, etc. nodes that save videos/gifs, etc. 
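// note: the two arrays are built in lockstep; potential_outputs[i] describes an output produced by potential_output_nodes[i]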
+  for (let i = 0; i < nodes.length; i++) {
+    const node = nodes[i];
+    if (!SUPPORTED_OUTPUT_NODE_TYPES.includes(node.type)) {
+      continue;
+    }
+
+    if (node.type === "SaveImage" || node.type === "CR Image Output" || node.type === "PreviewImage") {
+      // check if node has an 'images' array property
+      if (node.hasOwnProperty("images") && Array.isArray(node.images)) {
+        // iterate over the images array and add each image to the potential_outputs array
+        for (let j = 0; j < node.images.length; j++) {
+          potential_output_nodes.push(node);
+          potential_outputs.push({ "type": "image", "image": node.images[j], "title": node.title, "node_id": node.id });
+        }
+      }
+    }
+    else if (node.type === "VHS_VideoCombine") {
+      // check if node has a 'widgets' array property, with type 'image'
+      if (node.hasOwnProperty("widgets") && Array.isArray(node.widgets)) {
+        // iterate over the widgets array and add each image to the potential_outputs array
+        for (let j = 0; j < node.widgets.length; j++) {
+          if (node.widgets[j].type === "image") {
+            const widgetValue = node.widgets[j].value;
+            const parsedURLVals = parseURLPath(widgetValue);
+
+            // ensure that parsedURLVals has 'filename', 'subfolder', 'type', and 'format' properties
+            if (parsedURLVals.hasOwnProperty("filename") && parsedURLVals.hasOwnProperty("subfolder") && parsedURLVals.hasOwnProperty("type") && parsedURLVals.hasOwnProperty("format")) {
+              if (parsedURLVals.type !== "output") {
+                // TODO
+              }
+              potential_output_nodes.push(node);
+              potential_outputs.push({ "type": "output", 'title': node.title, "node_id": node.id, "output": { "filename": parsedURLVals.filename, "subfolder": parsedURLVals.subfolder, "value": widgetValue, "format": parsedURLVals.format } });
+            }
+          } else if (node.widgets[j].type === "preview") {
+            const widgetValue = node.widgets[j].value;
+            const parsedURLVals = widgetValue.params;
+
+            if(!parsedURLVals.format?.startsWith('image')) {
+              // videos are not a supported format here
+              continue;
+            }
+
+            // ensure that parsedURLVals has 'filename', 'subfolder', 'type', and 'format' properties
+            if (parsedURLVals.hasOwnProperty("filename") && parsedURLVals.hasOwnProperty("subfolder") && parsedURLVals.hasOwnProperty("type") && parsedURLVals.hasOwnProperty("format")) {
+              if (parsedURLVals.type !== "output") {
+                // TODO
+              }
+              potential_output_nodes.push(node);
+              potential_outputs.push({ "type": "output", 'title': node.title, "node_id": node.id, "output": { "filename": parsedURLVals.filename, "subfolder": parsedURLVals.subfolder, "value": `/view?filename=${parsedURLVals.filename}&subfolder=${parsedURLVals.subfolder}&type=${parsedURLVals.type}&format=${parsedURLVals.format}`, "format": parsedURLVals.format } });
+            }
+          }
+        }
+      }
+    }
+    else if (node.type === "ADE_AnimateDiffCombine") {
+      // check if node has a 'widgets' array property, with type 'image'
+      if (node.hasOwnProperty("widgets") && Array.isArray(node.widgets)) {
+        // iterate over the widgets array and add each image to the potential_outputs array
+        for (let j = 0; j < node.widgets.length; j++) {
+          if (node.widgets[j].type === "image") {
+            const widgetValue =
node.widgets[j].value; + const parsedURLVals = parseURLPath(widgetValue); + // ensure that the parsedURLVals have 'filename', 'subfolder', 'type', and 'format' properties + if (parsedURLVals.hasOwnProperty("filename") && parsedURLVals.hasOwnProperty("subfolder") && parsedURLVals.hasOwnProperty("type") && parsedURLVals.hasOwnProperty("format")) { + if (parsedURLVals.type !== "output") { + // TODO + continue; + } + potential_output_nodes.push(node); + potential_outputs.push({ "type": "output", 'title': node.title, "output": { "filename": parsedURLVals.filename, "subfolder": parsedURLVals.subfolder, "type": parsedURLVals.type, "value": widgetValue, "format": parsedURLVals.format } }); + } + } + } + } + } + else if (node.type === "SaveAnimatedWEBP") { + // check if node has an 'images' array property + if (node.hasOwnProperty("images") && Array.isArray(node.images)) { + // iterate over the images array and add each image to the potential_outputs array + for (let j = 0; j < node.images.length; j++) { + potential_output_nodes.push(node); + potential_outputs.push({ "type": "image", "image": node.images[j], "title": node.title }); + } + } + } + } + + // Note: make sure that two arrays are the same length + return { potential_outputs, potential_output_nodes }; +} + + +export function parseURLPath(urlPath) { + // Extract the query string from the URL path + var queryString = urlPath.split('?')[1]; + + // Use the URLSearchParams API to parse the query string + var params = new URLSearchParams(queryString); + + // Create an object to store the parsed parameters + var parsedParams = {}; + + // Iterate over each parameter and add it to the object + for (var pair of params.entries()) { + parsedParams[pair[0]] = pair[1]; + } + + // Return the object with the parsed parameters + return parsedParams; +} + + +export const showOpenArtShareDialog = () => { + if (!OpenArtShareDialog.instance) { + OpenArtShareDialog.instance = new OpenArtShareDialog(); + } + + return app.graphToPrompt() + .then(prompt => { + // console.log({ prompt }) + return app.graph._nodes; + }) + .then(nodes => { + const { potential_outputs, potential_output_nodes } = getPotentialOutputsAndOutputNodes(nodes); + OpenArtShareDialog.instance.show({ potential_outputs, potential_output_nodes}); + }) +} + + +export const showYouMLShareDialog = () => { + if (!YouMLShareDialog.instance) { + YouMLShareDialog.instance = new YouMLShareDialog(); + } + + return app.graphToPrompt() + .then(prompt => { + return app.graph._nodes; + }) + .then(nodes => { + const { potential_outputs, potential_output_nodes } = getPotentialOutputsAndOutputNodes(nodes); + YouMLShareDialog.instance.show(potential_outputs, potential_output_nodes); + }) +} + + +export const showShareDialog = async (share_option) => { + if (!ShareDialog.instance) { + ShareDialog.instance = new ShareDialog(share_option); + } + return app.graphToPrompt() + .then(prompt => { + // console.log({ prompt }) + return app.graph._nodes; + }) + .then(nodes => { + // console.log({ nodes }); + const { potential_outputs, potential_output_nodes } = getPotentialOutputsAndOutputNodes(nodes); + if (potential_outputs.length === 0) { + if (potential_output_nodes.length === 0) { + // todo: add support for other output node types (animatediff combine, etc.) + const supported_nodes_string = SUPPORTED_OUTPUT_NODE_TYPES.join(", "); + alert(`No supported output node found (${supported_nodes_string}). 
To share this workflow, please add an output node to your graph and re-run your prompt.`); + } else { + alert("To share this, first run a prompt. Once it's done, click 'Share'.\n\nNOTE: Images of the Share target can only be selected in the PreviewImage, SaveImage, and VHS_VideoCombine nodes. In the case of VHS_VideoCombine, only the image/gif and image/webp formats are supported."); + } + return false; + } + ShareDialog.instance.show({ potential_outputs, potential_output_nodes, share_option }); + return true; + }); +} + +export class ShareDialogChooser extends ComfyDialog { + static instance = null; + constructor() { + super(); + this.element = $el("div.comfy-modal", { + parent: document.body, style: { + 'overflow-y': "auto", + } + }, + [$el("div.comfy-modal-content", + {}, + [...this.createButtons()]), + ]); + this.selectedNodeId = null; + } + createButtons() { + const buttons = [ + { + key: "openart", + textContent: "OpenArt AI", + website: "https://openart.ai/workflows/", + description: "Share ComfyUI workflows and art on OpenArt.ai", + onclick: () => { + showOpenArtShareDialog(); + this.close(); + } + }, + { + key: "youml", + textContent: "YouML", + website: "https://youml.com", + description: "Share your workflow or transform it into an interactive app on YouML.com", + onclick: () => { + showYouMLShareDialog(); + this.close(); + } + }, + { + key: "matrix", + textContent: "Matrix Server", + website: "https://app.element.io/#/room/%23comfyui_space%3Amatrix.org", + description: "Share your art on the official ComfyUI matrix server", + onclick: async () => { + showShareDialog('matrix').then((suc) => { + suc && this.close(); + }) + } + }, + { + key: "comfyworkflows", + textContent: "ComfyWorkflows", + website: "https://comfyworkflows.com", + description: "Share & browse thousands of ComfyUI workflows and art 🎨
ComfyWorkflows.com", + onclick: () => { + showShareDialog('comfyworkflows').then((suc) => { + suc && this.close(); + }) + } + }, + ]; + + function createShareButtonsWithDescriptions() { + // Responsive container + const container = $el("div", { + style: { + display: "flex", + 'flex-wrap': 'wrap', + 'justify-content': 'space-around', + 'padding': '10px', + } + }); + + buttons.forEach(b => { + const button = $el("button", { + type: "button", + textContent: b.textContent, + onclick: b.onclick, + style: { + 'width': '25%', + 'minWidth': '200px', + 'background-color': b.backgroundColor || '', + 'border-radius': '5px', + 'cursor': 'pointer', + 'padding': '5px 5px', + 'margin-bottom': '5px', + 'transition': 'background-color 0.3s', + } + }); + button.addEventListener('mouseover', () => { + button.style.backgroundColor = '#007BFF'; // Change color on hover + }); + button.addEventListener('mouseout', () => { + button.style.backgroundColor = b.backgroundColor || ''; + }); + + const description = $el("p", { + innerHTML: b.description, + style: { + 'text-align': 'left', + color: 'white', + 'font-size': '14px', + 'margin-bottom': '0', + }, + }); + + const websiteLink = $el("a", { + textContent: "🌐 Website", + href: b.website, + target: "_blank", + style: { + color: 'white', + 'margin-left': '10px', + 'font-size': '12px', + 'text-decoration': 'none', + 'align-self': 'center', + }, + }); + + // Add highlight to the website link + websiteLink.addEventListener('mouseover', () => { + websiteLink.style.opacity = '0.7'; + }); + + websiteLink.addEventListener('mouseout', () => { + websiteLink.style.opacity = '1'; + }); + + const buttonLinkContainer = $el("div", { + style: { + display: 'flex', + 'align-items': 'center', + 'margin-bottom': '10px', + } + }, [button, websiteLink]); + + const column = $el("div", { + style: { + 'flex-basis': '100%', + 'margin': '10px', + 'padding': '10px 20px', + 'border': '1px solid #ddd', + 'border-radius': '5px', + 'box-shadow': '0 2px 4px rgba(0, 0, 0, 0.1)', + } + }, [buttonLinkContainer, description]); + + container.appendChild(column); + }); + + return container; + } + + return [ + $el("p", { + textContent: 'Choose a platform to share your workflow', + style: { + 'text-align': 'center', + 'color': 'white', + 'font-size': '18px', + 'margin-bottom': '10px', + }, + } + ), + + $el("div.cm-menu-container", { + id: "comfyui-share-container" + }, [ + $el("div.cm-menu-column", [ + createShareButtonsWithDescriptions(), + $el("br", {}, []), + ]), + ]), + $el("div.cm-menu-container", { + id: "comfyui-share-container" + }, [ + $el("button", { + type: "button", + style: { + margin: "0 25px", + width: "100%", + }, + textContent: "Close", + onclick: () => { + this.close() + } + }), + $el("br", {}, []), + ]), + ]; + } + show() { + this.element.style.display = "block"; + this.element.style.zIndex = 10001; + } +} +export class ShareDialog extends ComfyDialog { + static instance = null; + static matrix_auth = { homeserver: "matrix.org", username: "", password: "" }; + static cw_sharekey = ""; + + constructor(share_option) { + super(); + this.share_option = share_option; + this.element = $el("div.comfy-modal", { + parent: document.body, style: { + 'overflow-y': "auto", + } + }, + [$el("div.comfy-modal-content", + {}, + [...this.createButtons()]), + ]); + this.selectedOutputIndex = 0; + } + + createButtons() { + this.radio_buttons = $el("div", { + id: "selectOutputImages", + }, []); + + this.is_nsfw_checkbox = $el("input", { type: 'checkbox', id: "is_nsfw" }, []) + const is_nsfw_checkbox_text = 
$el("label", { + }, [" Is this NSFW?"]) + this.is_nsfw_checkbox.style.color = "var(--fg-color)"; + this.is_nsfw_checkbox.checked = false; + + this.matrix_destination_checkbox = $el("input", { type: 'checkbox', id: "matrix_destination" }, []) + const matrix_destination_checkbox_text = $el("label", {}, [" ComfyUI Matrix server"]) + this.matrix_destination_checkbox.style.color = "var(--fg-color)"; + this.matrix_destination_checkbox.checked = this.share_option === 'matrix'; //true; + + this.comfyworkflows_destination_checkbox = $el("input", { type: 'checkbox', id: "comfyworkflows_destination" }, []) + const comfyworkflows_destination_checkbox_text = $el("label", {}, [" ComfyWorkflows.com"]) + this.comfyworkflows_destination_checkbox.style.color = "var(--fg-color)"; + this.comfyworkflows_destination_checkbox.checked = this.share_option !== 'matrix'; + + this.matrix_homeserver_input = $el("input", { type: 'text', id: "matrix_homeserver", placeholder: "matrix.org", value: ShareDialog.matrix_auth.homeserver || 'matrix.org' }, []); + this.matrix_username_input = $el("input", { type: 'text', placeholder: "Username", value: ShareDialog.matrix_auth.username || '' }, []); + this.matrix_password_input = $el("input", { type: 'password', placeholder: "Password", value: ShareDialog.matrix_auth.password || '' }, []); + + this.cw_sharekey_input = $el("input", { type: 'text', placeholder: "Share key (found on your profile page)", value: ShareDialog.cw_sharekey || '' }, []); + this.cw_sharekey_input.style.width = "100%"; + + this.credits_input = $el("input", { + type: "text", + placeholder: "This will be used to give credits", + required: false, + }, []); + + this.title_input = $el("input", { + type: "text", + placeholder: "ex: My awesome art", + required: false + }, []); + + this.description_input = $el("textarea", { + placeholder: "ex: Trying out a new workflow... 
", + required: false, + }, []); + + this.share_button = $el("button", { + type: "submit", + textContent: "Share", + style: { + backgroundColor: "blue" + } + }, []); + + this.final_message = $el("div", { + style: { + color: "white", + textAlign: "center", + // marginTop: "10px", + // backgroundColor: "black", + padding: "10px", + } + }, []); + + this.share_finalmessage_container = $el("div.cm-menu-container", { + id: "comfyui-share-finalmessage-container", + style: { + display: "none", + } + }, [ + $el("div.cm-menu-column", [ + this.final_message, + $el("button", { + type: "button", + textContent: "Close", + onclick: () => { + // Reset state + this.matrix_destination_checkbox.checked = this.share_option === 'matrix'; + this.comfyworkflows_destination_checkbox.checked = this.share_option !== 'matrix'; + this.share_button.textContent = "Share"; + this.share_button.style.display = "inline-block"; + this.final_message.innerHTML = ""; + this.final_message.style.color = "white"; + this.credits_input.value = ""; + this.title_input.value = ""; + this.description_input.value = ""; + this.is_nsfw_checkbox.checked = false; + this.selectedOutputIndex = 0; + + // hide the final message + this.share_finalmessage_container.style.display = "none"; + + // show the share container + this.share_container.style.display = "flex"; + + this.close() + } + }), + ]) + ]); + this.share_container = $el("div.cm-menu-container", { + id: "comfyui-share-container" + }, [ + $el("div.cm-menu-column", [ + $el("details", { + style: { + border: "1px solid #999", + padding: "5px", + borderRadius: "5px", + backgroundColor: "#222" + } + }, [ + $el("summary", { + style: { + color: "white", + cursor: "pointer", + } + }, [`Matrix account`]), + $el("div", { + style: { + display: "flex", + flexDirection: "row", + } + }, [ + $el("div", { + textContent: "Homeserver", + style: { + marginRight: "10px", + } + }, []), + this.matrix_homeserver_input, + ]), + + $el("div", { + style: { + display: "flex", + flexDirection: "row", + } + }, [ + $el("div", { + textContent: "Username", + style: { + marginRight: "10px", + } + }, []), + this.matrix_username_input, + ]), + + $el("div", { + style: { + display: "flex", + flexDirection: "row", + } + }, [ + $el("div", { + textContent: "Password", + style: { + marginRight: "10px", + } + }, []), + this.matrix_password_input, + ]), + + ]), + $el("details", { + style: { + border: "1px solid #999", + marginTop: "10px", + padding: "5px", + borderRadius: "5px", + backgroundColor: "#222" + }, + }, [ + $el("summary", { + style: { + color: "white", + cursor: "pointer", + } + }, [`Comfyworkflows.com account`]), + $el("h4", { + textContent: "Share key (found on your profile page)", + }, []), + $el("p", { size: 3, color: "white" }, ["If provided, your art will be saved to your account. 
Otherwise, it will be shared anonymously."]), + this.cw_sharekey_input, + ]), + + $el("div", {}, [ + $el("p", { + size: 3, color: "white", style: { + color: 'white' + } + }, [`Select where to share your art:`]), + this.matrix_destination_checkbox, + matrix_destination_checkbox_text, + $el("br", {}, []), + this.comfyworkflows_destination_checkbox, + comfyworkflows_destination_checkbox_text, + ]), + + $el("h4", { + textContent: "Credits (optional)", + size: 3, + color: "white", + style: { + color: 'white' + } + }, []), + this.credits_input, + // $el("br", {}, []), + + $el("h4", { + textContent: "Title (optional)", + size: 3, + color: "white", + style: { + color: 'white' + } + }, []), + this.title_input, + // $el("br", {}, []), + + $el("h4", { + textContent: "Description (optional)", + size: 3, + color: "white", + style: { + color: 'white' + } + }, []), + this.description_input, + $el("br", {}, []), + + $el("div", {}, [this.is_nsfw_checkbox, is_nsfw_checkbox_text]), + // $el("br", {}, []), + + // this.final_message, + // $el("br", {}, []), + ]), + $el("div.cm-menu-column", [ + this.radio_buttons, + $el("br", {}, []), + + this.share_button, + + $el("button", { + type: "button", + textContent: "Close", + onclick: () => { + // Reset state + this.matrix_destination_checkbox.checked = this.share_option === 'matrix'; + this.comfyworkflows_destination_checkbox.checked = this.share_option !== 'matrix'; + this.share_button.textContent = "Share"; + this.share_button.style.display = "inline-block"; + this.final_message.innerHTML = ""; + this.final_message.style.color = "white"; + this.credits_input.value = ""; + this.title_input.value = ""; + this.description_input.value = ""; + this.is_nsfw_checkbox.checked = false; + this.selectedOutputIndex = 0; + + // hide the final message + this.share_finalmessage_container.style.display = "none"; + + // show the share container + this.share_container.style.display = "flex"; + + this.close() + } + }), + $el("br", {}, []), + ]), + ]); + + // get the user's existing matrix auth and share key + ShareDialog.matrix_auth = { homeserver: "matrix.org", username: "", password: "" }; + try { + api.fetchApi(`/manager/get_matrix_auth`) + .then(response => response.json()) + .then(data => { + ShareDialog.matrix_auth = data; + this.matrix_homeserver_input.value = ShareDialog.matrix_auth.homeserver; + this.matrix_username_input.value = ShareDialog.matrix_auth.username; + this.matrix_password_input.value = ShareDialog.matrix_auth.password; + }) + .catch(error => { + // console.log(error); + }); + } catch (error) { + // console.log(error); + } + + // get the user's existing comfyworkflows share key + ShareDialog.cw_sharekey = ""; + try { + // console.log("Fetching comfyworkflows share key") + api.fetchApi(`/manager/get_comfyworkflows_auth`) + .then(response => response.json()) + .then(data => { + ShareDialog.cw_sharekey = data.comfyworkflows_sharekey; + this.cw_sharekey_input.value = ShareDialog.cw_sharekey; + }) + .catch(error => { + // console.log(error); + }); + } catch (error) { + // console.log(error); + } + + this.share_button.onclick = async () => { + const prompt = await app.graphToPrompt(); + const nodes = app.graph._nodes; + + // console.log({ prompt, nodes }); + + const destinations = []; + if (this.matrix_destination_checkbox.checked) { + destinations.push("matrix"); + } + if (this.comfyworkflows_destination_checkbox.checked) { + destinations.push("comfyworkflows"); + } + + // if destinations includes matrix, make an api call to /manager/check_matrix to ensure that the 
user has configured their matrix settings
+      if (destinations.includes("matrix")) {
+        let definedMatrixAuth = !!this.matrix_homeserver_input.value && !!this.matrix_username_input.value && !!this.matrix_password_input.value;
+        if (!definedMatrixAuth) {
+          alert("Please set your Matrix account details.");
+          return;
+        }
+      }
+
+      if (destinations.includes("comfyworkflows") && !this.cw_sharekey_input.value && false) { //!confirm("You have NOT set your ComfyWorkflows.com share key. Your art will NOT be connected to your account (it will be shared anonymously). Continue?")) {
+        return;
+      }
+
+      const { potential_outputs, potential_output_nodes } = getPotentialOutputsAndOutputNodes(nodes);
+
+      // console.log({ potential_outputs, potential_output_nodes })
+
+      if (potential_outputs.length === 0) {
+        if (potential_output_nodes.length === 0) {
+          // todo: add support for other output node types (animatediff combine, etc.)
+          const supported_nodes_string = SUPPORTED_OUTPUT_NODE_TYPES.join(", ");
+          alert(`No supported output node found (${supported_nodes_string}). To share this workflow, please add an output node to your graph and re-run your prompt.`);
+        } else {
+          alert("To share this, first run a prompt. Once it's done, click 'Share'.\n\nNOTE: Images of the Share target can only be selected in the PreviewImage, SaveImage, and VHS_VideoCombine nodes. In the case of VHS_VideoCombine, only the image/gif and image/webp formats are supported.");
+        }
+        this.selectedOutputIndex = 0;
+        this.close();
+        return;
+      }
+
+      // Change the text of the share button to "Sharing..." to indicate that the share process has started
+      this.share_button.textContent = "Sharing...";
+
+      const response = await api.fetchApi(`/manager/share`, {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({
+          matrix_auth: {
+            homeserver: this.matrix_homeserver_input.value,
+            username: this.matrix_username_input.value,
+            password: this.matrix_password_input.value,
+          },
+          cw_auth: {
+            cw_sharekey: this.cw_sharekey_input.value,
+          },
+          share_destinations: destinations,
+          credits: this.credits_input.value,
+          title: this.title_input.value,
+          description: this.description_input.value,
+          is_nsfw: this.is_nsfw_checkbox.checked,
+          prompt,
+          potential_outputs,
+          selected_output_index: this.selectedOutputIndex,
+          // potential_output_nodes
+        })
+      });
+
+      if (response.status !== 200) {
+        try {
+          const response_json = await response.json();
+          if (response_json.error) {
+            alert(response_json.error);
+            this.close();
+            return;
+          } else {
+            alert("Failed to share your art. Please try again.");
+            this.close();
+            return;
+          }
+        } catch (e) {
+          alert("Failed to share your art. Please try again.");
+          this.close();
+          return;
+        }
+      }
+
+      const response_json = await response.json();
+
+      if (response_json.comfyworkflows.url) {
+        this.final_message.innerHTML = "Your art has been shared: <a href='" + response_json.comfyworkflows.url + "' target='_blank'>" + response_json.comfyworkflows.url + "</a>";
+        if (response_json.matrix.success) {
+          this.final_message.innerHTML += "<br>
Your art has been shared in the ComfyUI Matrix server's #share channel!";
+        }
+      } else {
+        if (response_json.matrix.success) {
+          this.final_message.innerHTML = "Your art has been shared in the ComfyUI Matrix server's #share channel!";
+        }
+      }
+
+      this.final_message.style.color = "green";
+
+      // hide #comfyui-share-container and show #comfyui-share-finalmessage-container
+      this.share_container.style.display = "none";
+      this.share_finalmessage_container.style.display = "block";
+
+      // hide the share button
+      this.share_button.textContent = "Shared!";
+      this.share_button.style.display = "none";
+      // this.close();
+    }
+
+    const res =
+      [
+        $el("tr.td", { width: "100%" }, [
+          $el("font", { size: 6, color: "white" }, [`Share your art`]),
+        ]),
+        $el("br", {}, []),
+
+        this.share_finalmessage_container,
+        this.share_container,
+      ];
+
+    res[0].style.padding = "10px 10px 10px 10px";
+    res[0].style.backgroundColor = "black"; //"linear-gradient(90deg, #00C9FF 0%, #92FE9D 100%)";
+    res[0].style.textAlign = "center";
+    res[0].style.height = "45px";
+    return res;
+  }
+
+  show({potential_outputs, potential_output_nodes, share_option}) {
+    // Sort `potential_output_nodes` by node ID to make the order always
+    // consistent, but we should also keep `potential_outputs` in the same
+    // order as `potential_output_nodes`.
+    const potential_output_to_order = {};
+    potential_output_nodes.forEach((node, index) => {
+      if (node.id in potential_output_to_order) {
+        potential_output_to_order[node.id][1].push(potential_outputs[index]);
+      } else {
+        potential_output_to_order[node.id] = [node, [potential_outputs[index]]];
+      }
+    })
+    // Sort the object `potential_output_to_order` by key (node ID)
+    const sorted_potential_output_to_order = Object.fromEntries(
+      Object.entries(potential_output_to_order).sort((a, b) => a[0] - b[0])
+    );
+    const sorted_potential_outputs = []
+    const sorted_potential_output_nodes = []
+    for (const [key, value] of Object.entries(sorted_potential_output_to_order)) {
+      sorted_potential_output_nodes.push(value[0]);
+      sorted_potential_outputs.push(...value[1]);
+    }
+    potential_output_nodes = sorted_potential_output_nodes;
+    potential_outputs = sorted_potential_outputs;
+
+    // console.log({ potential_outputs, potential_output_nodes })
+    this.radio_buttons.innerHTML = ""; // clear the radio buttons
+    let is_radio_button_checked = false; // only check the first radio button if multiple images come from the same node
+    const new_radio_buttons = $el("div", {
+      id: "selectOutput-Options",
+      style: {
+        'overflow-y': 'scroll',
+        'max-height': '400px',
+      }
+    }, potential_outputs.map((output, index) => {
+      const {node_id} = output;
+      const radio_button = $el("input", { type: 'radio', name: "selectOutputImages", value: index, required: index === 0 }, [])
+      let radio_button_img;
+      if (output.type === "image" || output.type === "temp") {
+        radio_button_img = $el("img", { src: `/view?filename=${output.image.filename}&subfolder=${output.image.subfolder}&type=${output.image.type}`, style: { width: "auto", height: "100px" } }, []);
+      } else if (output.type === "output") {
+        radio_button_img = $el("img", { src: output.output.value, style: { width: "auto", height: "100px" } }, []);
+      } else {
+        // unsupported output type
+        // this should never happen
+        // TODO
+        radio_button_img = $el("img", { src: "", style: { width: "auto", height: "100px" } }, []);
+      }
+      const radio_button_text = $el("label", {
+        // style: {
+        //   color: 'white'
+        // }
+      }, [output.title])
+      radio_button.style.color = "var(--fg-color)";
+
+      //
Make the radio button checked if it's the selected node, + // otherwise make the first radio button checked. + if (this.selectedNodeId) { + if (this.selectedNodeId === node_id && !is_radio_button_checked) { + radio_button.checked = true; + is_radio_button_checked = true; + } + } else { + radio_button.checked = index === 0; + } + + if (radio_button.checked) { + this.selectedOutputIndex = index; + } + + radio_button.onchange = () => { + this.selectedOutputIndex = parseInt(radio_button.value); + }; + + return $el("div", { + style: { + display: "flex", + 'align-items': 'center', + 'justify-content': 'space-between', + 'margin-bottom': '10px', + } + }, [radio_button, radio_button_text, radio_button_img]); + })); + const header = $el("h3", { + textContent: "Select an image to share", + size: 3, + color: "white", + style: { + 'text-align': 'center', + color: 'white', + backgroundColor: 'black', + padding: '10px', + 'margin-top': '0px', + } + }, [ + $el("p", { + textContent: "Scroll to see all outputs", + size: 2, + color: "white", + style: { + 'text-align': 'center', + color: 'white', + 'margin-bottom': '5px', + 'font-style': 'italic', + 'font-size': '12px', + }, + }, []) + ]); + this.radio_buttons.appendChild(header); + // this.radio_buttons.appendChild(subheader); + this.radio_buttons.appendChild(new_radio_buttons); + this.element.style.display = "block"; + + share_option = share_option || this.share_option; + if (share_option === 'comfyworkflows') { + this.matrix_destination_checkbox.checked = false; + this.comfyworkflows_destination_checkbox.checked = true; + } else { + this.matrix_destination_checkbox.checked = true; + this.comfyworkflows_destination_checkbox.checked = false; + } + } +} diff --git a/custom_nodes/ComfyUI-Manager/js/comfyui-share-openart.js b/custom_nodes/ComfyUI-Manager/js/comfyui-share-openart.js new file mode 100644 index 0000000000000000000000000000000000000000..157709077a1c0185841efe420dcfd46ecf0d6a9e --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/comfyui-share-openart.js @@ -0,0 +1,749 @@ +import {app} from "../../scripts/app.js"; +import {api} from "../../scripts/api.js"; +import {ComfyDialog, $el} from "../../scripts/ui.js"; + +const LOCAL_STORAGE_KEY = "openart_comfy_workflow_key"; +const DEFAULT_HOMEPAGE_URL = "https://openart.ai/workflows/dev?developer=true"; +//const DEFAULT_HOMEPAGE_URL = "http://localhost:8080/workflows/dev?developer=true"; + +const API_ENDPOINT = "https://openart.ai/api"; +//const API_ENDPOINT = "http://localhost:8080/api"; + +const style = ` + .openart-share-dialog a { + color: #f8f8f8; + } + .openart-share-dialog a:hover { + color: #007bff; + } + .output_label { + border: 5px solid transparent; + } + .output_label:hover { + border: 5px solid #59E8C6; + } + .output_label.checked { + border: 5px solid #59E8C6; + } +`; + +// Shared component styles +const sectionStyle = { + marginBottom: 0, + padding: 0, + borderRadius: "8px", + boxShadow: "0 2px 4px rgba(0, 0, 0, 0.05)", + display: "flex", + flexDirection: "column", + justifyContent: "center", +}; + +export class OpenArtShareDialog extends ComfyDialog { + static instance = null; + + constructor() { + super(); + $el("style", { + textContent: style, + parent: document.head, + }); + this.element = $el( + "div.comfy-modal.openart-share-dialog", + { + parent: document.body, + style: { + "overflow-y": "auto", + }, + }, + [$el("div.comfy-modal-content", {}, [...this.createButtons()])] + ); + this.selectedOutputIndex = 0; + this.selectedNodeId = null; + this.uploadedImages = []; + this.selectedFile 
= null; + } + + async readKey() { + let key = "" + try { + key = await api.fetchApi(`/manager/get_openart_auth`) + .then(response => response.json()) + .then(data => { + return data.openart_key; + }) + .catch(error => { + // console.log(error); + }); + } catch (error) { + // console.log(error); + } + return key || ""; + } + + async saveKey(value) { + await api.fetchApi(`/manager/set_openart_auth`, { + method: 'POST', + headers: {'Content-Type': 'application/json'}, + body: JSON.stringify({ + openart_key: value + }) + }); + } + + createButtons() { + const inputStyle = { + display: "block", + minWidth: "500px", + width: "100%", + padding: "10px", + margin: "10px 0", + borderRadius: "4px", + border: "1px solid #ddd", + boxSizing: "border-box", + }; + + const hyperLinkStyle = { + display: "block", + marginBottom: "15px", + fontWeight: "bold", + fontSize: "14px", + }; + + const labelStyle = { + color: "#f8f8f8", + display: "block", + margin: "10px 0 0 0", + fontWeight: "bold", + textDecoration: "none", + }; + + const buttonStyle = { + padding: "10px 80px", + margin: "10px 5px", + borderRadius: "4px", + border: "none", + cursor: "pointer", + color: "#fff", + backgroundColor: "#007bff", + }; + + // upload images input + this.uploadImagesInput = $el("input", { + type: "file", + multiple: false, + style: inputStyle, + accept: "image/*", + }); + + this.uploadImagesInput.addEventListener("change", async (e) => { + const file = e.target.files[0]; + if (!file) { + this.previewImage.src = ""; + this.previewImage.style.display = "none"; + return; + } + const reader = new FileReader(); + reader.onload = async (e) => { + const imgData = e.target.result; + this.previewImage.src = imgData; + this.previewImage.style.display = "block"; + this.selectedFile = null + // Once user uploads an image, we uncheck all radio buttons + this.radioButtons.forEach((ele) => { + ele.checked = false; + ele.parentElement.classList.remove("checked"); + }); + + // Add the opacity style toggle here to indicate that they only need + // to upload one image or choose one from the outputs. 
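+        // (0.35 dims the outputs panel to mark it as the inactive source
+        // while an uploaded file is selected; the inverse toggle further
+        // below sets it back to 1 when an output radio button is picked.)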
this.outputsSection.style.opacity = 0.35;
+        this.uploadImagesInput.style.opacity = 1;
+      };
+      reader.readAsDataURL(file);
+    });
+
+    // preview image
+    this.previewImage = $el("img", {
+      src: "",
+      style: {
+        width: "100%",
+        maxHeight: "100px",
+        objectFit: "contain",
+        display: "none",
+        marginTop: '10px',
+      },
+    });
+
+    this.keyInput = $el("input", {
+      type: "password",
+      placeholder: "Copy & paste your API key",
+      style: inputStyle,
+    });
+    this.NameInput = $el("input", {
+      type: "text",
+      placeholder: "Title (required)",
+      style: inputStyle,
+    });
+    this.descriptionInput = $el("textarea", {
+      placeholder: "Description (optional)",
+      style: {
+        ...inputStyle,
+        minHeight: "100px",
+      },
+    });
+
+    // Header Section
+    const headerSection = $el("h3", {
+      textContent: "Share your workflow to OpenArt",
+      size: 3,
+      color: "white",
+      style: {
+        'text-align': 'center',
+        color: 'white',
+        margin: '0 0 10px 0',
+      }
+    });
+
+    // Link Section
+    this.communityLink = $el("a", {
+      style: hyperLinkStyle,
+      href: DEFAULT_HOMEPAGE_URL,
+      target: "_blank"
+    }, ["👉 Check out thousands of workflows shared from the community"])
+    this.getAPIKeyLink = $el("a", {
+      style: {
+        ...hyperLinkStyle,
+        color: "#59E8C6"
+      },
+      href: DEFAULT_HOMEPAGE_URL,
+      target: "_blank"
+    }, ["👉 Get your API key here"])
+    const linkSection = $el(
+      "div",
+      {
+        style: {
+          marginTop: "10px",
+          display: "flex",
+          flexDirection: "column",
+        },
+      },
+      [
+        this.communityLink,
+        this.getAPIKeyLink,
+      ]
+    );
+
+    // Account Section
+    const accountSection = $el("div", {style: sectionStyle}, [
+      $el("label", {style: labelStyle}, ["1️⃣ OpenArt API Key"]),
+      this.keyInput,
+    ]);
+
+    // Output Upload Section
+    const outputUploadSection = $el("div", {style: sectionStyle}, [
+      $el("label", {
+        style: {
+          ...labelStyle,
+          margin: "10px 0 0 0"
+        }
+      }, ["2️⃣ Image/Thumbnail (Required)"]),
+      this.previewImage,
+      this.uploadImagesInput,
+    ]);
+
+    // Outputs Section
+    this.outputsSection = $el("div", {
+      id: "selectOutputs",
+    }, []);
+
+    // Additional Inputs Section
+    const additionalInputsSection = $el("div", {style: sectionStyle}, [
+      $el("label", {style: labelStyle}, ["3️⃣ Workflow Information"]),
+      this.NameInput,
+      this.descriptionInput,
+    ]);
+
+    // OpenArt Contest Section
+    /*
+    this.joinContestCheckbox = $el("input", {
+      type: 'checkbox',
+      id: "join_contest"
+    }, [])
+    this.joinContestDescription = $el("a", {
+      style: {
+        ...hyperLinkStyle,
+        display: 'inline-block',
+        color: "#59E8C6",
+        fontSize: '12px',
+        marginLeft: '10px',
+        marginBottom: 0,
+      },
+      href: "https://contest.openart.ai/",
+      target: "_blank"
+    }, ["🏆 I'm participating in the OpenArt workflow contest"])
+    this.joinContestLabel = $el("label", {
+      style: {
+        display: 'flex',
+        alignItems: 'center',
+        cursor: 'pointer',
+      }
+    }, [this.joinContestCheckbox, this.joinContestDescription])
+    const contestSection = $el("div", {style: sectionStyle}, [
+      this.joinContestLabel,
+    ]);
+    */
+
+    // Message Section
+    this.message = $el(
+      "div",
+      {
+        style: {
+          color: "#ff3d00",
+          textAlign: "center",
+          padding: "10px",
+          fontSize: "20px",
+        },
+      },
+      []
+    );
+
+    this.shareButton = $el("button", {
+      type: "submit",
+      textContent: "Share",
+      style: buttonStyle,
+      onclick: () => {
+        this.handleShareButtonClick();
+      },
+    });
+
+    // Share and Close Buttons
+    const buttonsSection = $el(
+      "div",
+      {
+        style: {
+          textAlign: "right",
+          marginTop: "20px",
+          display: "flex",
+          justifyContent: "space-between",
+        },
+      },
+      [
+        $el("button", {
+          type: "button",
+          textContent: "Close",
+          style: {
...buttonStyle,
+            backgroundColor: undefined,
+          },
+          onclick: () => {
+            this.close();
+          },
+        }),
+        this.shareButton,
+      ]
+    );
+
+    // Composing the full layout
+    const layout = [
+      headerSection,
+      linkSection,
+      accountSection,
+      outputUploadSection,
+      this.outputsSection,
+      additionalInputsSection,
+      // contestSection,
+      this.message,
+      buttonsSection,
+    ];
+
+    return layout;
+  }
+
+  async fetchApi(path, options, statusText) {
+    if (statusText) {
+      this.message.textContent = statusText;
+    }
+    const addSearchParams = (url, params = {}) =>
+      new URL(
+        `${url.origin}${url.pathname}?${new URLSearchParams([
+          ...Array.from(url.searchParams.entries()),
+          ...Object.entries(params),
+        ])}`
+      );
+
+    const fullPath = addSearchParams(new URL(API_ENDPOINT + path), {
+      workflow_api_key: this.keyInput.value,
+    });
+
+    const response = await fetch(fullPath, options);
+
+    if (!response.ok) {
+      throw new Error(response.statusText);
+    }
+
+    if (statusText) {
+      this.message.textContent = "";
+    }
+    const data = await response.json();
+    return {
+      ok: response.ok,
+      statusText: response.statusText,
+      status: response.status,
+      data,
+    };
+  }
+
+  async uploadThumbnail(uploadFile) {
+    const form = new FormData();
+    form.append("file", uploadFile);
+    try {
+      const res = await this.fetchApi(
+        `/workflows/upload_thumbnail`,
+        {
+          method: "POST",
+          body: form,
+        },
+        "Uploading thumbnail..."
+      );
+
+      if (res.ok && res.data) {
+        const {image_url, width, height} = res.data;
+        this.uploadedImages.push({
+          url: image_url,
+          width,
+          height,
+        });
+      }
+    } catch (e) {
+      if (e?.response?.status === 413) {
+        throw new Error("File size is too large (max 20MB)");
+      } else {
+        throw new Error("Error uploading thumbnail: " + e.message);
+      }
+    }
+  }
+
+  async handleShareButtonClick() {
+    this.message.textContent = "";
+    await this.saveKey(this.keyInput.value);
+    try {
+      this.shareButton.disabled = true;
+      this.shareButton.textContent = "Sharing...";
+      await this.share();
+    } catch (e) {
+      alert(e.message);
+    }
+    this.shareButton.disabled = false;
+    this.shareButton.textContent = "Share";
+  }
+
+  async share() {
+    const prompt = await app.graphToPrompt();
+    const workflowJSON = prompt["workflow"];
+    const workflowAPIJSON = prompt["output"];
+    const form_values = {
+      name: this.NameInput.value,
+      description: this.descriptionInput.value,
+    };
+
+    if (!this.keyInput.value) {
+      throw new Error("API key is required");
+    }
+
+    if (!this.uploadImagesInput.files[0] && !this.selectedFile) {
+      throw new Error("Thumbnail is required");
+    }
+
+    if (!form_values.name) {
+      throw new Error("Title is required");
+    }
+
+    const current_snapshot = await api.fetchApi(`/snapshot/get_current`)
+      .then(response => response.json())
+      .catch(error => {
+        // console.log(error);
+      });
+
+
+    if (!this.uploadedImages.length) {
+      if (this.selectedFile) {
+        await this.uploadThumbnail(this.selectedFile);
+      } else {
+        for (const file of this.uploadImagesInput.files) {
+          try {
+            await this.uploadThumbnail(file);
+          } catch (e) {
+            this.uploadedImages = [];
+            throw new Error(e.message);
+          }
+        }
+
+        if (this.uploadImagesInput.files.length === 0) {
+          throw new Error("No thumbnail uploaded");
+        }
+      }
+    }
+
+    // const join_contest = this.joinContestCheckbox.checked;
+
+    try {
+      const response = await this.fetchApi(
+        "/workflows/publish",
+        {
+          method: "POST",
+          headers: {"Content-Type": "application/json"},
+          body: JSON.stringify({
+            workflow_json: workflowJSON,
upload_images: this.uploadedImages,
+            form_values,
+            advanced_config: {
+              workflow_api_json: workflowAPIJSON,
+              snapshot: current_snapshot,
+            },
+            // join_contest,
+          }),
+        },
+        "Uploading workflow..."
+      );
+
+      if (response.ok) {
+        const {workflow_id} = response.data;
+        if (workflow_id) {
+          const url = `https://openart.ai/workflows/-/-/${workflow_id}`;
+          this.message.innerHTML = `Workflow has been shared successfully. <a href="${url}" target="_blank">Click here</a> to view it.`;
+          this.previewImage.src = "";
+          this.previewImage.style.display = "none";
+          this.uploadedImages = [];
+          this.NameInput.value = "";
+          this.descriptionInput.value = "";
+          this.radioButtons.forEach((ele) => {
+            ele.checked = false;
+            ele.parentElement.classList.remove("checked");
+          });
+          this.selectedOutputIndex = 0;
+          this.selectedNodeId = null;
+          this.selectedFile = null;
+        }
+      }
+    } catch (e) {
+      throw new Error("Error sharing workflow: " + e.message);
+    }
+  }
+
+  async fetchImageBlob(url) {
+    const response = await fetch(url);
+    const blob = await response.blob();
+    return blob;
+  }
+
+  async show({potential_outputs, potential_output_nodes} = {}) {
+    // Sort `potential_output_nodes` by node ID to make the order always
+    // consistent, but we should also keep `potential_outputs` in the same
+    // order as `potential_output_nodes`.
+    const potential_output_to_order = {};
+    potential_output_nodes.forEach((node, index) => {
+      if (node.id in potential_output_to_order) {
+        potential_output_to_order[node.id][1].push(potential_outputs[index]);
+      } else {
+        potential_output_to_order[node.id] = [node, [potential_outputs[index]]];
+      }
+    })
+    // Sort the object `potential_output_to_order` by key (node ID)
+    const sorted_potential_output_to_order = Object.fromEntries(
+      Object.entries(potential_output_to_order).sort((a, b) => a[0] - b[0])
+    );
+    const sorted_potential_outputs = []
+    const sorted_potential_output_nodes = []
+    for (const [key, value] of Object.entries(sorted_potential_output_to_order)) {
+      sorted_potential_output_nodes.push(value[0]);
+      sorted_potential_outputs.push(...value[1]);
+    }
+    potential_output_nodes = sorted_potential_output_nodes;
+    potential_outputs = sorted_potential_outputs;
+
+    this.message.innerHTML = "";
+    this.message.textContent = "";
+    this.element.style.display = "block";
+    this.previewImage.src = "";
+    this.previewImage.style.display = "none";
+    const key = await this.readKey();
+    this.keyInput.value = key;
+    this.uploadedImages = [];
+
+    // If `selectedNodeId` is provided, we pre-check the radio button that
+    // corresponds to that node further below.
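+    // For example (hypothetical IDs): with output nodes 4, 9 and 17 and
+    // selectedNodeId === 9, the radio button belonging to node 9 is
+    // pre-checked instead of the first output in the list.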
+    if (this.selectedNodeId) {
+      const index = potential_output_nodes.findIndex(node => node.id === this.selectedNodeId);
+      if (index >= 0) {
+        this.selectedOutputIndex = index;
+      }
+    }
+
+    this.radioButtons = [];
+    const new_radio_buttons = $el("div",
+      {
+        id: "selectOutput-Options",
+        style: {
+          'overflow-y': 'scroll',
+          'max-height': '200px',
+
+          'display': 'grid',
+          'grid-template-columns': 'repeat(auto-fit, minmax(100px, 1fr))',
+          'grid-template-rows': 'auto',
+          'grid-column-gap': '10px',
+          'grid-row-gap': '10px',
+          'margin-bottom': '10px',
+          'padding': '10px',
+          'border-radius': '8px',
+          'box-shadow': '0 2px 4px rgba(0, 0, 0, 0.05)',
+          'background-color': 'var(--bg-color)',
+        }
+      },
+      potential_outputs.map((output, index) => {
+        const {node_id} = output;
+        const radio_button = $el("input", {
+          type: 'radio',
+          name: "selectOutputImages",
+          value: index,
+          required: index === 0
+        }, [])
+        let radio_button_img;
+        let filename;
+        if (output.type === "image" || output.type === "temp") {
+          radio_button_img = $el("img", {
+            src: `/view?filename=${output.image.filename}&subfolder=${output.image.subfolder}&type=${output.image.type}`,
+            style: {
+              width: "100px",
+              height: "100px",
+              objectFit: "cover",
+              borderRadius: "5px"
+            }
+          }, []);
+          filename = output.image.filename
+        } else if (output.type === "output") {
+          radio_button_img = $el("img", {
+            src: output.output.value,
+            style: {
+              width: "auto",
+              height: "100px",
+              objectFit: "cover",
+              borderRadius: "5px"
+            }
+          }, []);
+          filename = output.output.filename
+        } else {
+          // unsupported output type
+          // this should never happen
+          // TODO
+          radio_button_img = $el("img", {
+            src: "",
+            style: {width: "auto", height: "100px"}
+          }, []);
+        }
+        const radio_button_text = $el("span", {
+          style: {
+            color: 'gray',
+            display: 'block',
+            fontSize: '12px',
+            overflowX: 'hidden',
+            textOverflow: 'ellipsis',
+            textWrap: 'nowrap',
+            maxWidth: '100px',
+          }
+        }, [output.title])
+        const node_id_chip = $el("span", {
+          style: {
+            color: '#FBFBFD',
+            display: 'block',
+            backgroundColor: 'rgba(0, 0, 0, 0.5)',
+            fontSize: '12px',
+            overflowX: 'hidden',
+            padding: '2px 3px',
+            textOverflow: 'ellipsis',
+            textWrap: 'nowrap',
+            maxWidth: '100px',
+            position: 'absolute',
+            top: '3px',
+            left: '3px',
+            borderRadius: '3px',
+          }
+        }, [`Node: ${node_id}`])
+        radio_button.style.color = "var(--fg-color)";
+        radio_button.checked = this.selectedOutputIndex === index;
+
+        radio_button.onchange = async () => {
+          this.selectedOutputIndex = parseInt(radio_button.value);
+
+          // Remove the "checked" class from all radio buttons
+          this.radioButtons.forEach((ele) => {
+            ele.parentElement.classList.remove("checked");
+          });
+          radio_button.parentElement.classList.add("checked");
+
+          this.fetchImageBlob(radio_button_img.src).then((blob) => {
+            const file = new File([blob], filename, {
+              type: blob.type,
+            });
+            this.previewImage.src = radio_button_img.src;
+            this.previewImage.style.display = "block";
+            this.selectedFile = file;
+          })
+
+          // Add the opacity style toggle here to indicate that they only need
+          // to upload one image or choose one from the outputs.
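+          // (inverse of the dimming applied in the upload-input handler
+          // above: the outputs panel becomes the active thumbnail source)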
+ this.outputsSection.style.opacity = 1; + this.uploadImagesInput.style.opacity = 0.35; + }; + + if (radio_button.checked) { + this.fetchImageBlob(radio_button_img.src).then((blob) => { + const file = new File([blob], filename, { + type: blob.type, + }); + this.previewImage.src = radio_button_img.src; + this.previewImage.style.display = "block"; + this.selectedFile = file; + }) + // Add the opacity style toggle here to indicate that they only need + // to upload one image or choose one from the outputs. + this.outputsSection.style.opacity = 1; + this.uploadImagesInput.style.opacity = 0.35; + } + + this.radioButtons.push(radio_button); + + return $el(`label.output_label${radio_button.checked ? '.checked' : ''}`, { + style: { + display: "flex", + flexDirection: "column", + alignItems: "center", + justifyContent: "center", + marginBottom: "10px", + cursor: "pointer", + position: 'relative', + } + }, [radio_button_img, radio_button_text, radio_button, node_id_chip]); + }) + ); + + const header = + $el("p", { + textContent: this.radioButtons.length === 0 ? "Queue Prompt to see the outputs" : "Or choose one from the outputs (scroll to see all)", + size: 2, + color: "white", + style: { + color: 'white', + margin: '0 0 5px 0', + fontSize: '12px', + }, + }, []) + this.outputsSection.innerHTML = ""; + this.outputsSection.appendChild(header); + this.outputsSection.appendChild(new_radio_buttons); + } +} diff --git a/custom_nodes/ComfyUI-Manager/js/comfyui-share-youml.js b/custom_nodes/ComfyUI-Manager/js/comfyui-share-youml.js new file mode 100644 index 0000000000000000000000000000000000000000..80077b25a0c1f0176c4aaa4b4263cadb9a2fbdf7 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/comfyui-share-youml.js @@ -0,0 +1,568 @@ +import {app} from "../../scripts/app.js"; +import {api} from "../../scripts/api.js"; +import {ComfyDialog, $el} from "../../scripts/ui.js"; + +const BASE_URL = "https://youml.com"; +//const BASE_URL = "http://localhost:3000"; +const DEFAULT_HOMEPAGE_URL = `${BASE_URL}/?from=comfyui`; +const TOKEN_PAGE_URL = `${BASE_URL}/my-token`; +const API_ENDPOINT = `${BASE_URL}/api`; + +const style = ` + .youml-share-dialog { + overflow-y: auto; + } + .youml-share-dialog .dialog-header { + text-align: center; + color: white; + margin: 0 0 10px 0; + } + .youml-share-dialog .dialog-section { + margin-bottom: 0; + padding: 0; + border-radius: 8px; + box-shadow: 0 2px 4px rgba(0, 0, 0, 0.05); + display: flex; + flex-direction: column; + justify-content: center; + } + .youml-share-dialog input, .youml-share-dialog textarea { + display: block; + min-width: 500px; + width: 100%; + padding: 10px; + margin: 10px 0; + border-radius: 4px; + border: 1px solid #ddd; + box-sizing: border-box; + } + .youml-share-dialog textarea { + color: var(--input-text); + background-color: var(--comfy-input-bg); + } + .youml-share-dialog .workflow-description { + min-height: 75px; + } + .youml-share-dialog label { + color: #f8f8f8; + display: block; + margin: 5px 0 0 0; + font-weight: bold; + text-decoration: none; + } + .youml-share-dialog .action-button { + padding: 10px 80px; + margin: 10px 5px; + border-radius: 4px; + border: none; + cursor: pointer; + } + .youml-share-dialog .share-button { + color: #fff; + background-color: #007bff; + } + .youml-share-dialog .close-button { + background-color: none; + } + .youml-share-dialog .action-button-panel { + text-align: right; + display: flex; + justify-content: space-between; + } + .youml-share-dialog .status-message { + color: #fd7909; + text-align: center; + padding: 
5px;
+    font-size: 18px;
+  }
+  .youml-share-dialog .status-message a {
+    color: white;
+  }
+  .youml-share-dialog .output-panel {
+    overflow: auto;
+    max-height: 180px;
+    display: grid;
+    grid-template-columns: repeat(auto-fit, minmax(100px, 1fr));
+    grid-template-rows: auto;
+    grid-column-gap: 10px;
+    grid-row-gap: 10px;
+    margin-bottom: 10px;
+    padding: 10px;
+    border-radius: 8px;
+    box-shadow: 0 2px 4px rgba(0, 0, 0, 0.05);
+    background-color: var(--bg-color);
+  }
+  .youml-share-dialog .output-panel .output-image {
+    width: 100px;
+    height: 100px;
+    object-fit: cover;
+    border-radius: 5px;
+  }
+
+  .youml-share-dialog .output-panel .radio-button {
+    color: var(--fg-color);
+  }
+  .youml-share-dialog .output-panel .radio-text {
+    color: gray;
+    display: block;
+    font-size: 12px;
+    overflow-x: hidden;
+    text-overflow: ellipsis;
+    text-wrap: nowrap;
+    max-width: 100px;
+  }
+  .youml-share-dialog .output-panel .node-id {
+    color: #FBFBFD;
+    display: block;
+    background-color: rgba(0, 0, 0, 0.5);
+    font-size: 12px;
+    overflow-x: hidden;
+    padding: 2px 3px;
+    text-overflow: ellipsis;
+    text-wrap: nowrap;
+    max-width: 100px;
+    position: absolute;
+    top: 3px;
+    left: 3px;
+    border-radius: 3px;
+  }
+  .youml-share-dialog .output-panel .output-label {
+    display: flex;
+    flex-direction: column;
+    align-items: center;
+    justify-content: center;
+    margin-bottom: 10px;
+    cursor: pointer;
+    position: relative;
+    border: 5px solid transparent;
+  }
+  .youml-share-dialog .output-panel .output-label:hover {
+    border: 5px solid #007bff;
+  }
+  .youml-share-dialog .output-panel .output-label.checked {
+    border: 5px solid #007bff;
+  }
+  .youml-share-dialog .missing-output-message {
+    color: #fd7909;
+    font-size: 16px;
+    margin-bottom: 10px;
+  }
+  .youml-share-dialog .select-output-message {
+    color: white;
+    margin-bottom: 5px;
+  }
+`;
+
+export class YouMLShareDialog extends ComfyDialog {
+  static instance = null;
+
+  constructor() {
+    super();
+    $el("style", {
+      textContent: style,
+      parent: document.head,
+    });
+    this.element = $el(
+      "div.comfy-modal.youml-share-dialog",
+      {
+        parent: document.body,
+      },
+      [$el("div.comfy-modal-content", {}, [...this.createLayout()])]
+    );
+    this.selectedOutputIndex = 0;
+    this.selectedNodeId = null;
+    this.uploadedImages = [];
+    this.selectedFile = null;
+  }
+
+  async loadToken() {
+    try {
+      const response = await api.fetchApi(`/manager/youml/settings`)
+      const settings = await response.json()
+      return settings.token
+    } catch (error) {
+    }
+    return "";
+  }
+
+  async saveToken(value) {
+    await api.fetchApi(`/manager/youml/settings`, {
+      method: 'POST',
+      headers: {'Content-Type': 'application/json'},
+      body: JSON.stringify({
+        token: value
+      })
+    });
+  }
+
+  createLayout() {
+    // Header Section
+    const headerSection = $el("h3.dialog-header", {
+      textContent: "Share your workflow to YouML.com",
+      size: 3,
+    });
+
+    // Workflow Info Section
+    this.nameInput = $el("input", {
+      type: "text",
+      placeholder: "Name (required)",
+    });
+    this.descriptionInput = $el("textarea.workflow-description", {
+      placeholder: "Description (optional, markdown supported)",
+    });
+    const workflowMetadata = $el("div.dialog-section", {}, [
+      $el("label", {}, ["Workflow info"]),
+      this.nameInput,
+      this.descriptionInput,
+    ]);
+
+    // Outputs Section
+    this.outputsSection = $el("div.dialog-section", {
+      id: "selectOutputs",
+    }, []);
+
+    const outputUploadSection = $el("div.dialog-section", {}, [
+      $el("label", {}, ["Thumbnail"]),
+      this.outputsSection,
+    ]);
+
+    // API Token Section
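+    // The token entered below is persisted server-side through the
+    // /manager/youml/settings endpoint (see loadToken/saveToken above) and
+    // is attached as a Bearer token by fetchYoumlApi() further down.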
this.apiTokenInput = $el("input", { + type: "password", + placeholder: "Copy & paste your API token", + }); + const getAPITokenButton = $el("button", { + href: DEFAULT_HOMEPAGE_URL, + target: "_blank", + onclick: () => window.open(TOKEN_PAGE_URL, "_blank"), + }, ["Get your API Token"]) + + const apiTokenSection = $el("div.dialog-section", {}, [ + $el("label", {}, ["YouML API Token"]), + this.apiTokenInput, + getAPITokenButton, + ]); + + // Message Section + this.message = $el("div.status-message", {}, []); + + // Share and Close Buttons + this.shareButton = $el("button.action-button.share-button", { + type: "submit", + textContent: "Share", + onclick: () => { + this.handleShareButtonClick(); + }, + }); + + const buttonsSection = $el( + "div.action-button-panel", + {}, + [ + $el("button.action-button.close-button", { + type: "button", + textContent: "Close", + onclick: () => { + this.close(); + }, + }), + this.shareButton, + ] + ); + + // Composing the full layout + const layout = [ + headerSection, + workflowMetadata, + outputUploadSection, + apiTokenSection, + this.message, + buttonsSection, + ]; + + return layout; + } + + async fetchYoumlApi(path, options, statusText) { + if (statusText) { + this.message.textContent = statusText; + } + + const fullPath = new URL(API_ENDPOINT + path) + + const fetchOptions = Object.assign({}, options) + + fetchOptions.headers = { + ...fetchOptions.headers, + "Authorization": `Bearer ${this.apiTokenInput.value}`, + "User-Agent": "ComfyUI-Manager-Youml/1.0.0", + } + + const response = await fetch(fullPath, fetchOptions); + + if (!response.ok) { + throw new Error(response.statusText + " " + (await response.text())); + } + + if (statusText) { + this.message.textContent = ""; + } + const data = await response.json(); + return { + ok: response.ok, + statusText: response.statusText, + status: response.status, + data, + }; + } + + async uploadThumbnail(uploadFile, recipeId) { + const form = new FormData(); + form.append("file", uploadFile, uploadFile.name); + try { + const res = await this.fetchYoumlApi( + `/v1/comfy/recipes/${recipeId}/thumbnail`, + { + method: "POST", + body: form, + }, + "Uploading thumbnail..." 
+ ); + + } catch (e) { + if (e?.response?.status === 413) { + throw new Error("File size is too large (max 20MB)"); + } else { + throw new Error("Error uploading thumbnail: " + e.message); + } + } + } + + async handleShareButtonClick() { + this.message.textContent = ""; + await this.saveToken(this.apiTokenInput.value); + try { + this.shareButton.disabled = true; + this.shareButton.textContent = "Sharing..."; + await this.share(); + } catch (e) { + alert(e.message); + } finally { + this.shareButton.disabled = false; + this.shareButton.textContent = "Share"; + } + } + + async share() { + const prompt = await app.graphToPrompt(); + const workflowJSON = prompt["workflow"]; + const workflowAPIJSON = prompt["output"]; + const form_values = { + name: this.nameInput.value, + description: this.descriptionInput.value, + }; + + if (!this.apiTokenInput.value) { + throw new Error("API token is required"); + } + + if (!this.selectedFile) { + throw new Error("Thumbnail is required"); + } + + if (!form_values.name) { + throw new Error("Title is required"); + } + + + try { + let snapshotData = null; + try { + const snapshot = await api.fetchApi(`/snapshot/get_current`) + snapshotData = await snapshot.json() + } catch (e) { + console.error("Failed to get snapshot", e) + } + + const request = { + name: this.nameInput.value, + description: this.descriptionInput.value, + workflowUiJson: JSON.stringify(workflowJSON), + workflowApiJson: JSON.stringify(workflowAPIJSON), + } + + if (snapshotData) { + request.snapshotJson = JSON.stringify(snapshotData) + } + + const response = await this.fetchYoumlApi( + "/v1/comfy/recipes", + { + method: "POST", + headers: {"Content-Type": "application/json"}, + body: JSON.stringify(request), + }, + "Uploading workflow..." + ); + + if (response.ok) { + const {id, recipePageUrl, editorPageUrl} = response.data; + if (id) { + let messagePrefix = "Workflow has been shared." + if (this.selectedFile) { + try { + await this.uploadThumbnail(this.selectedFile, id); + } catch (e) { + console.error("Thumbnail upload failed: ", e); + messagePrefix = "Workflow has been shared, but thumbnail upload failed. You can create a thumbnail on YouML later." 
+                        }
+                    }
+                    this.message.innerHTML = `${messagePrefix} To turn your workflow into an interactive app, ` +
+                        `<a href="${editorPageUrl}" target="_blank">visit it on YouML</a>`;
+
+                    this.uploadedImages = [];
+                    this.nameInput.value = "";
+                    this.descriptionInput.value = "";
+                    this.radioButtons.forEach((ele) => {
+                        ele.checked = false;
+                        ele.parentElement.classList.remove("checked");
+                    });
+                    this.selectedOutputIndex = 0;
+                    this.selectedNodeId = null;
+                    this.selectedFile = null;
+                }
+            }
+        } catch (e) {
+            throw new Error("Error sharing workflow: " + e.message);
+        }
+    }
+
+    async fetchImageBlob(url) {
+        const response = await fetch(url);
+        const blob = await response.blob();
+        return blob;
+    }
+
+    async show(potentialOutputs, potentialOutputNodes) {
+        const potentialOutputsToOrder = {};
+        potentialOutputNodes.forEach((node, index) => {
+            if (node.id in potentialOutputsToOrder) {
+                potentialOutputsToOrder[node.id][1].push(potentialOutputs[index]);
+            } else {
+                potentialOutputsToOrder[node.id] = [node, [potentialOutputs[index]]];
+            }
+        })
+        const sortedPotentialOutputsToOrder = Object.fromEntries(
+            Object.entries(potentialOutputsToOrder).sort((a, b) => a[0] - b[0])
+        );
+        const sortedPotentialOutputs = []
+        const sortedPotentialOutputNodes = []
+        for (const [key, value] of Object.entries(sortedPotentialOutputsToOrder)) {
+            sortedPotentialOutputNodes.push(value[0]);
+            sortedPotentialOutputs.push(...value[1]);
+        }
+        potentialOutputNodes = sortedPotentialOutputNodes;
+        potentialOutputs = sortedPotentialOutputs;
+
+
+        // If `selectedNodeId` is provided, select the radio button that
+        // corresponds to that node so it becomes the default thumbnail
+        // choice.
+        if (this.selectedNodeId) {
+            const index = potentialOutputNodes.findIndex(node => node.id === this.selectedNodeId);
+            if (index >= 0) {
+                this.selectedOutputIndex = index;
+            }
+        }
+
+        this.radioButtons = [];
+        const newRadioButtons = $el("div.output-panel",
+            {
+                id: "selectOutput-Options",
+            },
+            potentialOutputs.map((output, index) => {
+                const {node_id: nodeId} = output;
+                const radioButton = $el("input.radio-button", {
+                    type: "radio",
+                    name: "selectOutputImages",
+                    value: index,
+                    required: index === 0
+                }, [])
+                let radioButtonImage;
+                let filename;
+                if (output.type === "image" || output.type === "temp") {
+                    radioButtonImage = $el("img.output-image", {
+                        src: `/view?filename=${output.image.filename}&subfolder=${output.image.subfolder}&type=${output.image.type}`,
+                    }, []);
+                    filename = output.image.filename
+                } else if (output.type === "output") {
+                    radioButtonImage = $el("img.output-image", {
+                        src: output.output.value,
+                    }, []);
+                    filename = output.output.filename
+                } else {
+                    radioButtonImage = $el("img.output-image", {
+                        src: "",
+                    }, []);
+                }
+                const radioButtonText = $el("span.radio-text", {}, [output.title])
+                const nodeIdChip = $el("span.node-id", {}, [`Node: ${nodeId}`])
+                radioButton.checked = this.selectedOutputIndex === index;
+
+                radioButton.onchange = async () => {
+                    this.selectedOutputIndex = parseInt(radioButton.value);
+
+                    // Remove the "checked" class from all radio buttons
+                    this.radioButtons.forEach((ele) => {
+                        ele.parentElement.classList.remove("checked");
+                    });
+                    radioButton.parentElement.classList.add("checked");
+
+                    this.fetchImageBlob(radioButtonImage.src).then((blob) => {
+                        const file = new File([blob], filename, {
+                            type: blob.type,
+                        });
+                        this.selectedFile = file;
+                    })
+                };
+
+                if (radioButton.checked) {
+                    this.fetchImageBlob(radioButtonImage.src).then((blob) => {
+                        const file = new File([blob], filename, {
+                            type:
blob.type, + }); + this.selectedFile = file; + }) + } + + this.radioButtons.push(radioButton); + + return $el(`label.output-label${radioButton.checked ? '.checked' : ''}`, {}, + [radioButtonImage, radioButtonText, radioButton, nodeIdChip]); + }) + ); + + let header; + if (this.radioButtons.length === 0) { + header = $el("div.missing-output-message", {textContent: "Queue Prompt to see the outputs and select a thumbnail"}, []) + } else { + header = $el("div.select-output-message", {textContent: "Choose one from the outputs (scroll to see all)"}, []) + } + + this.outputsSection.innerHTML = ""; + this.outputsSection.appendChild(header); + if (this.radioButtons.length > 0) { + this.outputsSection.appendChild(newRadioButtons); + } + + this.message.innerHTML = ""; + this.message.textContent = ""; + + const token = await this.loadToken(); + this.apiTokenInput.value = token; + this.uploadedImages = []; + + this.element.style.display = "block"; + } +} diff --git a/custom_nodes/ComfyUI-Manager/js/common.js b/custom_nodes/ComfyUI-Manager/js/common.js new file mode 100644 index 0000000000000000000000000000000000000000..a45bf75f95fe53ea18018465e3a26df6d15ffa8a --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/common.js @@ -0,0 +1,166 @@ +import { app } from "../../scripts/app.js"; +import { api } from "../../scripts/api.js"; + +export async function sleep(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +export function rebootAPI() { + if (confirm("Are you sure you'd like to reboot the server?")) { + try { + api.fetchApi("/manager/reboot"); + } + catch(exception) { + + } + return true; + } + + return false; +} + +export async function install_checked_custom_node(grid_rows, target_i, caller, mode) { + if(caller) { + let failed = ''; + + caller.disableButtons(); + + for(let i in grid_rows) { + if(!grid_rows[i].checkbox.checked && i != target_i) + continue; + + var target; + + if(grid_rows[i].data.custom_node) { + target = grid_rows[i].data.custom_node; + } + else { + target = grid_rows[i].data; + } + + caller.startInstall(target); + + try { + const response = await api.fetchApi(`/customnode/${mode}`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(target) + }); + + if(response.status == 400) { + show_message(`${mode} failed: ${target.title}`); + continue; + } + + const status = await response.json(); + app.ui.dialog.close(); + target.installed = 'True'; + continue; + } + catch(exception) { + failed += `
<BR> ${target.title}`;
+            }
+        }
+
+        if(failed != '') {
+            show_message(`${mode} failed: ${failed}`);
+        }
+
+        await caller.invalidateControl();
+        caller.updateMessage("<BR>To apply the installed/updated/disabled/enabled custom node, please <button class='cm-small-button' id='cm-reboot-button'>RESTART</button> ComfyUI and refresh the browser.", 'cm-reboot-button');
+    }
+};
+
+export var manager_instance = null;
+
+export function setManagerInstance(obj) {
+    manager_instance = obj;
+}
+
+function isValidURL(url) {
+    if(url.includes('&'))
+        return false;
+
+    const pattern = /^(https?|ftp):\/\/[^\s/$.?#].[^\s]*$/;
+    return pattern.test(url);
+}
+
+export async function install_pip(packages) {
+    if(packages.includes('&')) {
+        app.ui.dialog.show(`Invalid PIP package enumeration: '${packages}'`);
+        return;
+    }
+
+    const res = await api.fetchApi(`/customnode/install/pip?packages=${packages}`);
+
+    if(res.status == 200) {
+        app.ui.dialog.show(`PIP package installation is processed.<BR>To apply the pip packages, please click the <button class='cm-small-button' id='cm-reboot-button'>RESTART</button> button in ComfyUI.`);
+
+        const rebootButton = document.getElementById('cm-reboot-button');
+        rebootButton.addEventListener("click", rebootAPI);
+
+        app.ui.dialog.element.style.zIndex = 10010;
+    }
+    else {
+        app.ui.dialog.show(`Failed to install '${packages}'<BR>See terminal log.`);
+        app.ui.dialog.element.style.zIndex = 10010;
+    }
+}
+
+export async function install_via_git_url(url, manager_dialog) {
+    if(!url) {
+        return;
+    }
+
+    if(!isValidURL(url)) {
+        app.ui.dialog.show(`Invalid Git url '${url}'`);
+        app.ui.dialog.element.style.zIndex = 10010;
+        return;
+    }
+
+    app.ui.dialog.show(`Wait...<BR><BR>Installing '${url}'`);
+    app.ui.dialog.element.style.zIndex = 10010;
+
+    const res = await api.fetchApi(`/customnode/install/git_url?url=${url}`);
+
+    if(res.status == 200) {
+        app.ui.dialog.show(`'${url}' is installed<BR>To apply the installed custom node, please <button class='cm-small-button' id='cm-reboot-button'>RESTART</button> ComfyUI.`);
+
+        const rebootButton = document.getElementById('cm-reboot-button');
+
+        rebootButton.addEventListener("click",
+            function() {
+                if(rebootAPI()) {
+                    manager_dialog.close();
+                }
+            });
+
+        app.ui.dialog.element.style.zIndex = 10010;
+    }
+    else {
+        app.ui.dialog.show(`Failed to install '${url}'<BR>See terminal log.`);
+        app.ui.dialog.element.style.zIndex = 10010;
+    }
+}
+
+export async function free_models() {
+    let res = await api.fetchApi(`/free`, {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: '{}'
+    });
+
+    if(res.status == 200) {
+        app.ui.dialog.show('Models have been unloaded.')
+    }
+    else {
+        app.ui.dialog.show('Unloading of models failed.<BR><BR>Installed ComfyUI may be an outdated version.')
+    }
+    app.ui.dialog.element.style.zIndex = 10010;
+}
+
+export function show_message(msg) {
+    app.ui.dialog.show(msg);
+    app.ui.dialog.element.style.zIndex = 10010;
+}
\ No newline at end of file
diff --git a/custom_nodes/ComfyUI-Manager/js/components-manager.js b/custom_nodes/ComfyUI-Manager/js/components-manager.js
new file mode 100644
index 0000000000000000000000000000000000000000..248a74d25164905a09d1d68c039f079bcdb2a7ae
--- /dev/null
+++ b/custom_nodes/ComfyUI-Manager/js/components-manager.js
@@ -0,0 +1,809 @@
+import { app } from "../../scripts/app.js";
+import { api } from "../../scripts/api.js"
+import { sleep, show_message } from "./common.js";
+import { GroupNodeConfig, GroupNodeHandler } from "../../extensions/core/groupNode.js";
+import { ComfyDialog, $el } from "../../scripts/ui.js";
+
+let pack_map = {};
+let rpack_map = {};
+
+export function getPureName(node) {
+    // group nodes
+    let category = null;
+    if(node.category) {
+        category = node.category.substring(12);
+    }
+    else {
+        category = node.constructor.category?.substring(12);
+    }
+    if(category) {
+        let purename = node.comfyClass.substring(category.length+1);
+        return purename;
+    }
+    else if(node.comfyClass.startsWith('workflow/')) {
+        return node.comfyClass.substring(9);
+    }
+    else {
+        return node.comfyClass;
+    }
+}
+
+function isValidVersionString(version) {
+    const versionPattern = /^(\d+)\.(\d+)(\.(\d+))?$/;
+
+    const match = version.match(versionPattern);
+
+    return match !== null &&
+        parseInt(match[1], 10) >= 0 &&
+        parseInt(match[2], 10) >= 0 &&
+        (!match[3] || parseInt(match[4], 10) >= 0);
+}
+
+function register_pack_map(name, data) {
+    if(data.packname) {
+        pack_map[data.packname] = name;
+        rpack_map[name] = data;
+    }
+    else {
+        rpack_map[name] = data;
+    }
+}
+
+function storeGroupNode(name, data, register=true) {
+    let extra = app.graph.extra;
+    if (!extra) app.graph.extra = extra = {};
+    let groupNodes = extra.groupNodes;
+    if (!groupNodes) extra.groupNodes = groupNodes = {};
+    groupNodes[name] = data;
+
+    if(register) {
+        register_pack_map(name, data);
+    }
+}
+
+export async function load_components() {
+    let data = await api.fetchApi('/manager/component/loads', {method: "POST"});
+    let components = await data.json();
+
+    let start_time = Date.now();
+    let failed = [];
+    let failed2 = [];
+
+    for(let name in components) {
+        if(app.graph.extra?.groupNodes?.[name]) {
+            if(data) {
+                let data = components[name];
+
+                let category = data.packname;
+                if(data.category) {
+                    category += "/" + data.category;
+                }
+                if(category == '') {
+                    category = 'components';
+                }
+
+                const config = new GroupNodeConfig(name, data);
+                await config.registerType(category);
+
+                register_pack_map(name, data);
+                continue;
+            }
+        }
+
+        let nodeData = components[name];
+
+        storeGroupNode(name, nodeData);
+
+        const config = new GroupNodeConfig(name, nodeData);
+
+        while(true) {
+            try {
+                let category = nodeData.packname;
+                if(nodeData.category) {
+                    category += "/" + nodeData.category;
+                }
+                if(category == '') {
+                    category = 'components';
+                }
+
+                await config.registerType(category);
+                register_pack_map(name, nodeData);
+                break;
+            }
+            catch {
+                let elapsed_time = Date.now() - start_time;
+                if (elapsed_time > 5000) {
+                    failed.push(name);
+                    break;
+                } else {
+                    await sleep(100);
+                }
+            }
+        }
+    }
+
+    // fallback1
+    for(let i in failed) {
+        let name = failed[i];
+
+        if(app.graph.extra?.groupNodes?.[name]) {
+            continue;
+        }
+
+        let nodeData = components[name];
+
+        storeGroupNode(name, nodeData);
+
+        const config = new GroupNodeConfig(name, nodeData);
+        while(true) {
+            try {
+                let category = nodeData.packname;
+                if(nodeData.category) {
+                    category += "/" + nodeData.category;
+                }
+                if(category == '') {
+                    category = 'components';
+                }
+
+                await config.registerType(category);
+                register_pack_map(name, nodeData);
+                break;
+            }
+            catch {
+                let elapsed_time = Date.now() - start_time;
+                if (elapsed_time > 10000) {
+                    failed2.push(name);
+                    break;
+                } else {
+                    await sleep(100);
+                }
+            }
+        }
+    }
+
+    // fallback2
+    for(let i in failed2) {
+        let name = failed2[i];
+
+        let nodeData = components[name];
+
+        storeGroupNode(name, nodeData);
+
+        const config = new GroupNodeConfig(name, nodeData);
+        while(true) {
+            try {
+                let category = nodeData.packname;
+                if(nodeData.category) {
+                    category += "/" + nodeData.category;
+                }
+                if(category == '') {
+                    category = 'components';
+                }
+
+                await config.registerType(category);
+                register_pack_map(name, nodeData);
+                break;
+            }
+            catch {
+                let elapsed_time = Date.now() - start_time;
+                if (elapsed_time > 30000) {
+                    failed.push(name);
+                    break;
+                } else {
+                    await sleep(100);
+                }
+            }
+        }
+    }
+}
+
+async function save_as_component(node, version, author, prefix, nodename, packname, category) {
+    let component_name = `${prefix}::${nodename}`;
+
+    let subgraph = app.graph.extra?.groupNodes?.[component_name];
+    if(!subgraph) {
+        subgraph = app.graph.extra?.groupNodes?.[getPureName(node)];
+    }
+
+    subgraph.version = version;
+    subgraph.author = author;
+    subgraph.datetime = Date.now();
+    subgraph.packname = packname;
+    subgraph.category = category;
+
+    let body =
+        {
+            name: component_name,
+            workflow: subgraph
+        };
+
+    pack_map[packname] = component_name;
+    rpack_map[component_name] = subgraph;
+
+    const res = await api.fetchApi('/manager/component/save', {
+        method: "POST",
+        headers: {
+            "Content-Type": "application/json",
+        },
+        body: JSON.stringify(body),
+    });
+
+    if(res.status == 200) {
+        storeGroupNode(component_name, subgraph);
+        const config = new GroupNodeConfig(component_name, subgraph);
+
+        let category = body.workflow.packname;
+        if(body.workflow.category) {
+            category += "/" + body.workflow.category;
+        }
+        if(category == '') {
+            category = 'components';
+        }
+
+        await config.registerType(category);
+
+        let path = await res.text();
+        show_message(`Component '${component_name}' is saved into:\n${path}`);
+    }
+    else
+        show_message(`Failed to save component.`);
+}
+
+async function import_component(component_name, component, mode) {
+    if(mode) {
+        let body =
+            {
+                name: component_name,
+                workflow: component
+            };
+
+        const res = await api.fetchApi('/manager/component/save', {
+            method: "POST",
+            headers: { "Content-Type": "application/json", },
+            body: JSON.stringify(body)
+        });
+    }
+
+    let category = component.packname;
+    if(component.category) {
+        category += "/" + component.category;
+    }
+    if(category == '') {
+        category = 'components';
+    }
+
+    storeGroupNode(component_name, component);
+    const config = new GroupNodeConfig(component_name, component);
+    await config.registerType(category);
+}
+
+function restore_to_loaded_component(component_name) {
+    if(rpack_map[component_name]) {
+        let component = rpack_map[component_name];
+        storeGroupNode(component_name, component, false);
+        const config = new GroupNodeConfig(component_name, component);
+        config.registerType(component.category);
+    }
+}
+
+// Track the timestamp of the last paste so the same components are not imported twice and litegrapheditor_clipboard is not re-deleted.
+let last_paste_timestamp = null;
+
+function versionCompare(v1, v2) {
+    let ver1;
+    let ver2;
+    if(v1 && v1 != '') {
+        ver1 = v1.split('.');
+        ver1[0] = parseInt(ver1[0]);
+        ver1[1] = parseInt(ver1[1]);
+        if(ver1.length == 2)
+            ver1.push(0);
+        else
+            ver1[2] = parseInt(ver1[2]);
+    }
+    else {
+        ver1 = [0,0,0];
+    }
+
+    if(v2 && v2 != '') {
+        ver2 = v2.split('.');
+        ver2[0] = parseInt(ver2[0]);
+        ver2[1] = parseInt(ver2[1]);
+        if(ver2.length == 2)
+            ver2.push(0);
+        else
+            ver2[2] = parseInt(ver2[2]);
+    }
+    else {
+        ver2 = [0,0,0];
+    }
+
+    if(ver1[0] > ver2[0])
+        return -1;
+    else if(ver1[0] < ver2[0])
+        return 1;
+
+    if(ver1[1] > ver2[1])
+        return -1;
+    else if(ver1[1] < ver2[1])
+        return 1;
+
+    if(ver1[2] > ver2[2])
+        return -1;
+    else if(ver1[2] < ver2[2])
+        return 1;
+
+    return 0;
+}
+
+function checkVersion(name, component) {
+    let msg = '';
+    if(rpack_map[name]) {
+        let old_version = rpack_map[name].version;
+        if(!old_version || old_version == '') {
+            msg = ` '${name}' Upgrade (V0.0 -> V${component.version})`;
+        }
+        else {
+            let c = versionCompare(old_version, component.version);
+            if(c < 0) {
+                msg = ` '${name}' Downgrade (V${old_version} -> V${component.version})`;
+            }
+            else if(c > 0) {
+                msg = ` '${name}' Upgrade (V${old_version} -> V${component.version})`;
+            }
+            else {
+                msg = ` '${name}' Same version (V${component.version})`;
+            }
+        }
+    }
+    else {
+        msg = `'${name}' NEW (V${component.version})`;
+    }
+
+    return msg;
+}
+
+function handle_import_components(components) {
+    let msg = 'Components:\n';
+    let cnt = 0;
+    for(let name in components) {
+        let component = components[name];
+        let v = checkVersion(name, component);
+
+        if(cnt < 10) {
+            msg += v + '\n';
+        }
+        else if (cnt == 10) {
+            msg += '...\n';
+        }
+        else {
+            // do nothing
+        }
+
+        cnt++;
+    }
+
+    let last_name = null;
+    msg += '\nLoad these components?\n';
+    if(confirm(msg)) {
+        let mode = confirm('\nSave these components?\n(Cancel = load without saving)');
+
+        for(let name in components) {
+            let component = components[name];
+            import_component(name, component, mode);
+            last_name = name;
+        }
+
+        if(mode) {
+            show_message('Components have been saved.');
+        }
+        else {
+            show_message('Components have been loaded.');
+        }
+    }
+
+    if(cnt == 1 && last_name) {
+        const node = LiteGraph.createNode(`workflow/${last_name}`);
+        node.pos = [app.canvas.graph_mouse[0], app.canvas.graph_mouse[1]];
+        app.canvas.graph.add(node, false);
+    }
+}
+
+function handlePaste(e) {
+    let data = (e.clipboardData || window.clipboardData);
+    const items = data.items;
+    for(const item of items) {
+        if(item.kind == 'string' && item.type == 'text/plain') {
+            data = data.getData("text/plain");
+            try {
+                let json_data = JSON.parse(data);
+                if(json_data.kind == 'ComfyUI Components' && last_paste_timestamp != json_data.timestamp) {
+                    last_paste_timestamp = json_data.timestamp;
+                    handle_import_components(json_data.components);
+
+                    // disable paste node
+                    localStorage.removeItem("litegrapheditor_clipboard");
+                }
+                else {
+                    console.log('These components have already been pasted: ignored');
+                }
+            }
+            catch {
+                // nothing to do
+            }
+        }
+    }
+}
+
+document.addEventListener("paste", handlePaste);
+
+
+export class ComponentBuilderDialog extends ComfyDialog {
+    constructor() {
+        super();
+    }
+
+    clear() {
+        while (this.element.children.length) {
+            this.element.removeChild(this.element.children[0]);
+        }
+    }
+
+    show() {
+        this.invalidateControl();
+
+        this.element.style.display = "block";
+        this.element.style.zIndex = 10001;
+        this.element.style.width = "500px";
+        this.element.style.height = "480px";
+    }
+
+    invalidateControl() {
+        this.clear();
+
+        let self = this;
+
+        const close_button = $el("button", { id: "cm-close-button", type: "button", textContent: "Close", onclick: () => self.close() });
+        this.save_button = $el("button",
+            { id: "cm-save-button", type: "button", textContent: "Save", onclick: () =>
+                {
+                    save_as_component(self.target_node, self.version_string.value.trim(), self.author.value.trim(), self.node_prefix.value.trim(),
+                        self.getNodeName(), self.getPackName(), self.category.value.trim());
+                }
+            });
+
+        let default_nodename = getPureName(this.target_node).trim();
+
+        let groupNode = app.graph.extra.groupNodes[default_nodename];
+        let default_packname = groupNode.packname;
+        if(!default_packname) {
+            default_packname = '';
+        }
+
+        let default_category = groupNode.category;
+        if(!default_category) {
+            default_category = '';
+        }
+
+        this.default_ver = groupNode.version;
+        if(!this.default_ver) {
+            this.default_ver = '0.0';
+        }
+
+        let default_author = groupNode.author;
+        if(!default_author) {
+            default_author = '';
+        }
+
+        let delimiterIndex = default_nodename.indexOf('::');
+        let default_prefix = "";
+        if(delimiterIndex != -1) {
+            default_prefix = default_nodename.substring(0, delimiterIndex);
+            default_nodename = default_nodename.substring(delimiterIndex + 2);
+        }
+
+        if(!default_prefix) {
+            this.save_button.disabled = true;
+        }
+
+        this.pack_list = this.createPackListCombo();
+
+        let version_string = this.createLabeledInput('input version (e.g. 1.0)', '*Version : ', this.default_ver);
+        this.version_string = version_string[1];
+        this.version_string.disabled = true;
+
+        let author = this.createLabeledInput('input author (e.g. Dr.Lt.Data)', 'Author : ', default_author);
+        this.author = author[1];
+
+        let node_prefix = this.createLabeledInput('input node prefix (e.g. mypack)', '*Prefix : ', default_prefix);
+        this.node_prefix = node_prefix[1];
+
+        let manual_nodename = this.createLabeledInput('input node name (e.g. MAKE_BASIC_PIPE)', 'Nodename : ', default_nodename);
+        this.manual_nodename = manual_nodename[1];
+
+        let manual_packname = this.createLabeledInput('input pack name (e.g. mypack)', 'Packname : ', default_packname);
+        this.manual_packname = manual_packname[1];
+
+        let category = this.createLabeledInput('input category (e.g. util/pipe)', 'Category : ', default_category);
+        this.category = category[1];
+
+        this.node_label = this.createNodeLabel();
+
+        let author_mode = this.createAuthorModeCheck();
+        this.author_mode = author_mode[0];
+
+        const content =
+            $el("div.comfy-modal-content",
+                [
+                    $el("tr.cm-title", {}, [
+                        $el("font", {size:6, color:"white"}, [`ComfyUI-Manager: Component Builder`])]
+                    ),
+                    $el("br", {}, []),
+                    $el("div.cm-menu-container",
+                        [
+                            author_mode[0],
+                            author_mode[1],
+                            category[0],
+                            author[0],
+                            node_prefix[0],
+                            manual_nodename[0],
+                            manual_packname[0],
+                            version_string[0],
+                            this.pack_list,
+                            $el("br", {}, []),
+                            this.node_label
+                        ]),
+
+                    $el("br", {}, []),
+                    this.save_button,
+                    close_button,
+                ]
+            );
+
+        content.style.width = '100%';
+        content.style.height = '100%';
+
+        this.element = $el("div.comfy-modal", { id:'cm-manager-dialog', parent: document.body }, [ content ]);
+    }
+
+    validateInput() {
+        let msg = "";
+
+        if(!isValidVersionString(this.version_string.value)) {
+            msg += 'Invalid version string: ' + this.version_string.value + "\n";
+        }
+
+        if(this.node_prefix.value.trim() == '') {
+            msg += 'Node prefix cannot be empty\n';
+        }
+
+        if(this.manual_nodename.value.trim() == '') {
+            msg += 'Node name cannot be empty\n';
+        }
+
+        if(msg != '') {
+//            alert(msg);
+        }
+
+        this.save_button.disabled = msg != "";
+    }
+
+    getPackName() {
+        if(this.pack_list.selectedIndex == 0) {
+            return this.manual_packname.value.trim();
+        }
+
+        return this.pack_list.value.trim();
+    }
+
+    getNodeName() {
+        if(this.manual_nodename.value.trim() != '') {
+            return this.manual_nodename.value.trim();
+        }
+
+        return getPureName(this.target_node);
+    }
+
+    createAuthorModeCheck() {
+        let check = $el("input",{type:'checkbox', id:"author-mode"},[])
+        const check_label = $el("label",{for:"author-mode"},["Enable author mode"]);
+        check_label.style.color = "var(--fg-color)";
+        check_label.style.cursor = "pointer";
+        check.checked = false;
+
+        let self = this;
+        check.onchange = () => {
+            self.version_string.disabled = !check.checked;
+
+            if(!check.checked) {
+                self.version_string.value = self.default_ver;
+            }
+            else {
+                alert('If you are not the author, it is not recommended to change the version, as it may cause component update issues.');
+            }
+        };
+
+        return [check, check_label];
+    }
+
+    createNodeLabel() {
+        let label = $el('p');
+        label.className = 'cb-node-label';
+        if(this.target_node.comfyClass.includes('::'))
+            label.textContent = getPureName(this.target_node);
+        else
+            label.textContent = " _::" + getPureName(this.target_node);
+        return label;
+    }
+
+    createLabeledInput(placeholder, label, value) {
+        let textbox = $el('input.cb-widget-input', {type:'text', placeholder:placeholder, value:value}, []);
+
+        let self = this;
+        textbox.onchange = () => {
+            this.validateInput.call(self);
+            this.node_label.textContent = this.node_prefix.value + "::" + this.manual_nodename.value;
+        }
+        let row = $el('span.cb-widget', {}, [ $el('span.cb-widget-input-label', label), textbox]);
+
+        return [row, textbox];
+    }
+
+    createPackListCombo() {
+        let combo = document.createElement("select");
+        combo.className = "cb-widget";
+        let default_packname_option = { value: '##manual', text: 'Packname: Manual' };
+
+        combo.appendChild($el('option', default_packname_option, []));
+        for(let name in pack_map) {
+            combo.appendChild($el('option', { value: name, text: 'Packname: '+ name }, []));
+        }
+
+        let self = this;
+        combo.onchange = function () {
+            if(combo.selectedIndex == 0) {
+                self.manual_packname.disabled = false;
+            }
+            else {
+                
self.manual_packname.disabled = true; + } + }; + + return combo; + } +} + +let orig_handleFile = app.handleFile; + +function handleFile(file) { + if (file.name?.endsWith(".json") || file.name?.endsWith(".pack")) { + const reader = new FileReader(); + reader.onload = async () => { + let is_component = false; + const jsonContent = JSON.parse(reader.result); + for(let name in jsonContent) { + let cand = jsonContent[name]; + is_component = cand.datetime && cand.version; + break; + } + + if(is_component) { + handle_import_components(jsonContent); + } + else { + orig_handleFile.call(app, file); + } + }; + reader.readAsText(file); + + return; + } + + orig_handleFile.call(app, file); +} + +app.handleFile = handleFile; + +let current_component_policy = 'workflow'; +try { + api.fetchApi('/manager/component/policy') + .then(response => response.text()) + .then(data => { current_component_policy = data; }); +} +catch {} + +function getChangedVersion(groupNodes) { + if(!Object.keys(pack_map).length || !groupNodes) + return null; + + let res = {}; + for(let component_name in groupNodes) { + let data = groupNodes[component_name]; + + if(rpack_map[component_name]) { + let v = versionCompare(data.version, rpack_map[component_name].version); + res[component_name] = v; + } + } + + return res; +} + +const loadGraphData = app.loadGraphData; +app.loadGraphData = async function () { + if(arguments.length == 0) + return await loadGraphData.apply(this, arguments); + + let graphData = arguments[0]; + let groupNodes = graphData.extra?.groupNodes; + let res = getChangedVersion(groupNodes); + + if(res) { + let target_components = null; + switch(current_component_policy) { + case 'higher': + target_components = Object.keys(res).filter(key => res[key] == 1); + break; + + case 'mine': + target_components = Object.keys(res); + break; + + default: + // do nothing + } + + if(target_components) { + for(let i in target_components) { + let component_name = target_components[i]; + let component = rpack_map[component_name]; + if(component && graphData.extra?.groupNodes) { + graphData.extra.groupNodes[component_name] = component; + } + } + } + } + else { + console.log('Empty components: policy ignored'); + } + + arguments[0] = graphData; + return await loadGraphData.apply(this, arguments); +}; + +export function set_component_policy(v) { + current_component_policy = v; +} + +let graphToPrompt = app.graphToPrompt; +app.graphToPrompt = async function () { + let p = await graphToPrompt.call(app); + try { + let groupNodes = p.workflow.extra?.groupNodes; + if(groupNodes) { + p.workflow.extra = { ... 
p.workflow.extra}; + + // get used group nodes + let used_group_nodes = new Set(); + for(let node of p.workflow.nodes) { + if(node.type.startsWith('workflow/')) { + used_group_nodes.add(node.type.substring(9)); + } + } + + // remove unused group nodes + let new_groupNodes = {}; + for (let key in p.workflow.extra.groupNodes) { + if (used_group_nodes.has(key)) { + new_groupNodes[key] = p.workflow.extra.groupNodes[key]; + } + } + p.workflow.extra.groupNodes = new_groupNodes; + } + } + catch(e) { + console.log(`Failed to filtering group nodes: ${e}`); + } + + return p; +} diff --git a/custom_nodes/ComfyUI-Manager/js/custom-nodes-downloader.js b/custom_nodes/ComfyUI-Manager/js/custom-nodes-downloader.js new file mode 100644 index 0000000000000000000000000000000000000000..10d893e93b1dff61e7e24825177ff0e09e4f79b0 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/custom-nodes-downloader.js @@ -0,0 +1,801 @@ +import { app } from "../../scripts/app.js"; +import { api } from "../../scripts/api.js" +import { ComfyDialog, $el } from "../../scripts/ui.js"; +import { install_checked_custom_node, manager_instance, rebootAPI } from "./common.js"; + + +async function getCustomNodes() { + var mode = manager_instance.datasrc_combo.value; + + var skip_update = ""; + if(manager_instance.update_check_checkbox.checked) + skip_update = "&skip_update=true"; + + const response = await api.fetchApi(`/customnode/getlist?mode=${mode}${skip_update}`); + + const data = await response.json(); + return data; +} + +async function getCustomnodeMappings() { + var mode = manager_instance.datasrc_combo.value; + + const response = await api.fetchApi(`/customnode/getmappings?mode=${mode}`); + + const data = await response.json(); + return data; +} + +async function getConflictMappings() { + var mode = manager_instance.datasrc_combo.value; + + const response = await api.fetchApi(`/customnode/getmappings?mode=${mode}`); + + const data = await response.json(); + + let node_to_extensions_map = {}; + + for(let k in data) { + for(let i in data[k][0]) { + let node = data[k][0][i]; + let l = node_to_extensions_map[node]; + if(!l) { + l = []; + node_to_extensions_map[node] = l; + } + l.push(k); + } + } + + let conflict_map = {}; + for(let node in node_to_extensions_map) { + if(node_to_extensions_map[node].length > 1) { + for(let i in node_to_extensions_map[node]) { + let extension = node_to_extensions_map[node][i]; + let l = conflict_map[extension]; + + if(!l) { + l = []; + conflict_map[extension] = l; + } + + for(let j in node_to_extensions_map[node]) { + let extension2 = node_to_extensions_map[node][j]; + if(extension != extension2) + l.push([node, extension2]); + } + } + } + } + + return conflict_map; +} + +async function getUnresolvedNodesInComponent() { + try { + var mode = manager_instance.datasrc_combo.value; + + const response = await api.fetchApi(`/component/get_unresolved`); + + const data = await response.json(); + return data.nodes; + } + catch { + return []; + } +} + +export class CustomNodesInstaller extends ComfyDialog { + static instance = null; + + install_buttons = []; + message_box = null; + data = null; + + static ShowMode = { + NORMAL: 0, + MISSING_NODES: 1, + UPDATE: 2, + }; + + clear() { + this.install_buttons = []; + this.message_box = null; + this.data = null; + } + + constructor(app, manager_dialog) { + super(); + this.manager_dialog = manager_dialog; + this.search_keyword = ''; + this.element = $el("div.comfy-modal", { parent: document.body }, []); + } + + startInstall(target) { + const self = 
CustomNodesInstaller.instance;
+
+        self.updateMessage(`<BR>
Installing '${target.title}'`); + } + + disableButtons() { + for(let i in this.install_buttons) { + this.install_buttons[i].disabled = true; + this.install_buttons[i].style.backgroundColor = 'gray'; + } + } + + apply_searchbox(data) { + let keyword = this.search_box.value.toLowerCase(); + for(let i in this.grid_rows) { + let data = this.grid_rows[i].data; + let content = data.author.toLowerCase() + data.description.toLowerCase() + data.title.toLowerCase() + data.reference.toLowerCase(); + + if(this.filter && this.filter != '*') { + if(this.filter == 'True' && (data.installed == 'Update' || data.installed == 'Fail')) { + this.grid_rows[i].control.style.display = null; + } + else if(this.filter != data.installed) { + this.grid_rows[i].control.style.display = 'none'; + continue; + } + } + + if(keyword == "") + this.grid_rows[i].control.style.display = null; + else if(content.includes(keyword)) { + this.grid_rows[i].control.style.display = null; + } + else { + this.grid_rows[i].control.style.display = 'none'; + } + } + } + + async filter_missing_node(data) { + const mappings = await getCustomnodeMappings(); + + + // build regex->url map + const regex_to_url = []; + for (let i in data) { + if(data[i]['nodename_pattern']) { + let item = {regex: new RegExp(data[i].nodename_pattern), url: data[i].files[0]}; + regex_to_url.push(item); + } + } + + // build name->url map + const name_to_url = {}; + for (const url in mappings) { + const names = mappings[url]; + for(const name in names[0]) { + name_to_url[names[0][name]] = url; + } + } + + const registered_nodes = new Set(); + for (let i in LiteGraph.registered_node_types) { + registered_nodes.add(LiteGraph.registered_node_types[i].type); + } + + const missing_nodes = new Set(); + const workflow = app.graph.serialize(); + const group_nodes = workflow.extra && workflow.extra.groupNodes ? 
workflow.extra.groupNodes : []; + let nodes = workflow.nodes; + + for (let i in group_nodes) { + let group_node = group_nodes[i]; + nodes = nodes.concat(group_node.nodes); + } + + for (let i in nodes) { + const node_type = nodes[i].type; + if(node_type.startsWith('workflow/')) + continue; + + if (!registered_nodes.has(node_type)) { + const url = name_to_url[node_type.trim()]; + if(url) + missing_nodes.add(url); + else { + for(let j in regex_to_url) { + if(regex_to_url[j].regex.test(node_type)) { + missing_nodes.add(regex_to_url[j].url); + } + } + } + } + } + + let unresolved_nodes = await getUnresolvedNodesInComponent(); + for (let i in unresolved_nodes) { + let node_type = unresolved_nodes[i]; + const url = name_to_url[node_type]; + if(url) + missing_nodes.add(url); + } + + return data.filter(node => node.files.some(file => missing_nodes.has(file))); + } + + async invalidateControl() { + this.clear(); + + // splash + while (this.element.children.length) { + this.element.removeChild(this.element.children[0]); + } + + const msg = $el('div', {id:'custom-message'}, + [$el('br'), + 'The custom node DB is currently being updated, and updates to custom nodes are being checked for.', + $el('br'), + 'NOTE: Update only checks for extensions that have been fetched.', + $el('br')]); + msg.style.height = '100px'; + msg.style.verticalAlign = 'middle'; + msg.style.color = "var(--fg-color)"; + + this.element.appendChild(msg); + + // invalidate + let data = await getCustomNodes(); + this.data = data.custom_nodes; + this.channel = data.channel; + + this.conflict_mappings = await getConflictMappings(); + + if(this.show_mode == CustomNodesInstaller.ShowMode.MISSING_NODES) + this.data = await this.filter_missing_node(this.data); + + this.element.removeChild(msg); + + while (this.element.children.length) { + this.element.removeChild(this.element.children[0]); + } + + this.createHeaderControls(); + await this.createGrid(); + this.apply_searchbox(this.data); + this.createBottomControls(); + } + + updateMessage(msg, btn_id) { + this.message_box.innerHTML = msg; + if(btn_id) { + const rebootButton = document.getElementById(btn_id); + const self = this; + rebootButton.addEventListener("click", + function() { + if(rebootAPI()) { + self.close(); + self.manager_dialog.close(); + } + }); + console.log(rebootButton); + } + } + + invalidate_checks(is_checked, install_state) { + if(is_checked) { + for(let i in this.grid_rows) { + let data = this.grid_rows[i].data; + let checkbox = this.grid_rows[i].checkbox; + let buttons = this.grid_rows[i].buttons; + + checkbox.disabled = data.installed != install_state; + + if(checkbox.disabled) { + for(let j in buttons) { + buttons[j].style.display = 'none'; + } + } + else { + for(let j in buttons) { + buttons[j].style.display = null; + } + } + } + + this.checkbox_all.disabled = false; + } + else { + for(let i in this.grid_rows) { + let checkbox = this.grid_rows[i].checkbox; + if(checkbox.check) + return; // do nothing + } + + // every checkbox is unchecked -> enable all checkbox + for(let i in this.grid_rows) { + let checkbox = this.grid_rows[i].checkbox; + let buttons = this.grid_rows[i].buttons; + checkbox.disabled = false; + + for(let j in buttons) { + buttons[j].style.display = null; + } + } + + this.checkbox_all.checked = false; + this.checkbox_all.disabled = true; + } + } + + check_all(is_checked) { + if(is_checked) { + // lookup first checked item's state + let check_state = null; + for(let i in this.grid_rows) { + let checkbox = this.grid_rows[i].checkbox; + 
if(checkbox.checked) { + check_state = this.grid_rows[i].data.installed; + } + } + + if(check_state == null) + return; + + // check only same state items + for(let i in this.grid_rows) { + let checkbox = this.grid_rows[i].checkbox; + if(this.grid_rows[i].data.installed == check_state) + checkbox.checked = true; + } + } + else { + // uncheck all + for(let i in this.grid_rows) { + let checkbox = this.grid_rows[i].checkbox; + let buttons = this.grid_rows[i].buttons; + checkbox.checked = false; + checkbox.disabled = false; + + for(let j in buttons) { + buttons[j].style.display = null; + } + } + + this.checkbox_all.disabled = true; + } + } + + async createGrid() { + var grid = document.createElement('table'); + grid.setAttribute('id', 'custom-nodes-grid'); + + this.grid_rows = {}; + + let self = this; + + var thead = document.createElement('thead'); + var tbody = document.createElement('tbody'); + + var headerRow = document.createElement('tr'); + thead.style.position = "sticky"; + thead.style.top = "0px"; + thead.style.borderCollapse = "collapse"; + thead.style.tableLayout = "fixed"; + + var header0 = document.createElement('th'); + header0.style.width = "20px"; + this.checkbox_all = $el("input",{type:'checkbox', id:'check_all'},[]); + header0.appendChild(this.checkbox_all); + this.checkbox_all.checked = false; + this.checkbox_all.disabled = true; + this.checkbox_all.addEventListener('change', function() { self.check_all.call(self, self.checkbox_all.checked); }); + + var header1 = document.createElement('th'); + header1.innerHTML = '  ID  '; + header1.style.width = "20px"; + var header2 = document.createElement('th'); + header2.innerHTML = 'Author'; + header2.style.width = "150px"; + var header3 = document.createElement('th'); + header3.innerHTML = 'Name'; + header3.style.width = "20%"; + var header4 = document.createElement('th'); + header4.innerHTML = 'Description'; + header4.style.width = "60%"; +// header4.classList.add('expandable-column'); + var header5 = document.createElement('th'); + header5.innerHTML = 'Install'; + header5.style.width = "130px"; + + header0.style.position = "sticky"; + header0.style.top = "0px"; + header1.style.position = "sticky"; + header1.style.top = "0px"; + header2.style.position = "sticky"; + header2.style.top = "0px"; + header3.style.position = "sticky"; + header3.style.top = "0px"; + header4.style.position = "sticky"; + header4.style.top = "0px"; + header5.style.position = "sticky"; + header5.style.top = "0px"; + + thead.appendChild(headerRow); + headerRow.appendChild(header0); + headerRow.appendChild(header1); + headerRow.appendChild(header2); + headerRow.appendChild(header3); + headerRow.appendChild(header4); + headerRow.appendChild(header5); + + headerRow.style.backgroundColor = "Black"; + headerRow.style.color = "White"; + headerRow.style.textAlign = "center"; + headerRow.style.width = "100%"; + headerRow.style.padding = "0"; + + grid.appendChild(thead); + grid.appendChild(tbody); + + if(this.data) + for (var i = 0; i < this.data.length; i++) { + const data = this.data[i]; + let dataRow = document.createElement('tr'); + + let data0 = document.createElement('td'); + let checkbox = $el("input",{type:'checkbox', id:`check_${i}`},[]); + data0.appendChild(checkbox); + checkbox.checked = false; + checkbox.addEventListener('change', function() { self.invalidate_checks.call(self, checkbox.checked, data.installed); }); + + var data1 = document.createElement('td'); + data1.style.textAlign = "center"; + data1.innerHTML = i+1; + + var data2 = 
document.createElement('td'); + data2.style.maxWidth = "100px"; + data2.className = "cm-node-author" + data2.textContent = ` ${data.author}`; + data2.style.whiteSpace = "nowrap"; + data2.style.overflow = "hidden"; + data2.style.textOverflow = "ellipsis"; + + var data3 = document.createElement('td'); + data3.style.maxWidth = "200px"; + data3.style.wordWrap = "break-word"; + data3.className = "cm-node-name" + data3.innerHTML = ` ${data.title}`; + if(data.installed == 'Fail') + data3.innerHTML = ' (IMPORT FAILED)' + data3.innerHTML; + + var data4 = document.createElement('td'); + data4.innerHTML = data.description; + data4.className = "cm-node-desc" + + let conflicts = this.conflict_mappings[data.files[0]]; + if(conflicts) { + let buf = '

Conflicted Nodes:
'; + for(let k in conflicts) { + let node_name = conflicts[k][0]; + + let extension_name = conflicts[k][1].split('/').pop(); + if(extension_name.endsWith('/')) { + extension_name = extension_name.slice(0, -1); + } + if(node_name.endsWith('.git')) { + extension_name = extension_name.slice(0, -4); + } + + buf += `${node_name} [${extension_name}], `; + } + + if(buf.endsWith(', ')) { + buf = buf.slice(0, -2); + } + buf += "

"; + data4.innerHTML += buf; + } + + var data5 = document.createElement('td'); + data5.style.textAlign = "center"; + + var installBtn = document.createElement('button'); + installBtn.className = "cm-btn-install"; + var installBtn2 = null; + var installBtn3 = null; + var installBtn4 = null; + + this.install_buttons.push(installBtn); + + switch(data.installed) { + case 'Disabled': + installBtn3 = document.createElement('button'); + installBtn3.innerHTML = 'Enable'; + installBtn3.className = "cm-btn-enable"; + installBtn3.style.backgroundColor = 'blue'; + installBtn3.style.color = 'white'; + this.install_buttons.push(installBtn3); + + installBtn.innerHTML = 'Uninstall'; + installBtn.style.backgroundColor = 'red'; + break; + + case 'Update': + installBtn2 = document.createElement('button'); + installBtn2.innerHTML = 'Update'; + installBtn2.className = "cm-btn-update"; + installBtn2.style.backgroundColor = 'blue'; + installBtn2.style.color = 'white'; + this.install_buttons.push(installBtn2); + + installBtn3 = document.createElement('button'); + installBtn3.innerHTML = 'Disable'; + installBtn3.className = "cm-btn-disable"; + installBtn3.style.backgroundColor = 'MediumSlateBlue'; + installBtn3.style.color = 'white'; + this.install_buttons.push(installBtn3); + + installBtn.innerHTML = 'Uninstall'; + installBtn.style.backgroundColor = 'red'; + break; + + case 'Fail': + installBtn4 = document.createElement('button'); + installBtn4.innerHTML = 'Try fix'; + installBtn4.className = "cm-btn-disable"; + installBtn4.style.backgroundColor = '#6495ED'; + installBtn4.style.color = 'white'; + this.install_buttons.push(installBtn4); + + case 'True': + if(manager_instance.update_check_checkbox.checked) { + installBtn2 = document.createElement('button'); + installBtn2.innerHTML = 'Try update'; + installBtn2.className = "cm-btn-update"; + installBtn2.style.backgroundColor = 'Gray'; + installBtn2.style.color = 'white'; + this.install_buttons.push(installBtn2); + } + + installBtn3 = document.createElement('button'); + installBtn3.innerHTML = 'Disable'; + installBtn3.className = "cm-btn-disable"; + installBtn3.style.backgroundColor = 'MediumSlateBlue'; + installBtn3.style.color = 'white'; + this.install_buttons.push(installBtn3); + + installBtn.innerHTML = 'Uninstall'; + installBtn.style.backgroundColor = 'red'; + break; + case 'False': + installBtn.innerHTML = 'Install'; + installBtn.style.backgroundColor = 'black'; + installBtn.style.color = 'white'; + break; + default: + installBtn.innerHTML = `Try Install`; + installBtn.style.backgroundColor = 'Gray'; + installBtn.style.color = 'white'; + } + + let j = i; + if(installBtn2 != null) { + installBtn2.style.width = "120px"; + installBtn2.addEventListener('click', function() { + install_checked_custom_node(self.grid_rows, j, CustomNodesInstaller.instance, 'update'); + }); + + data5.appendChild(installBtn2); + } + + if(installBtn3 != null) { + installBtn3.style.width = "120px"; + installBtn3.addEventListener('click', function() { + install_checked_custom_node(self.grid_rows, j, CustomNodesInstaller.instance, 'toggle_active'); + }); + + data5.appendChild(installBtn3); + } + + if(installBtn4 != null) { + installBtn4.style.width = "120px"; + installBtn4.addEventListener('click', function() { + install_checked_custom_node(self.grid_rows, j, CustomNodesInstaller.instance, 'fix'); + }); + + data5.appendChild(installBtn4); + } + + installBtn.style.width = "120px"; + installBtn.addEventListener('click', function() { + if(this.innerHTML == 'Uninstall') { + if (confirm(`Are you 
sure uninstall ${data.title}?`)) { + install_checked_custom_node(self.grid_rows, j, CustomNodesInstaller.instance, 'uninstall'); + } + } + else { + install_checked_custom_node(self.grid_rows, j, CustomNodesInstaller.instance, 'install'); + } + }); + + if(!data.author.startsWith('#NOTICE')){ + data5.appendChild(installBtn); + } + + if(data.installed == 'Fail' || data.author.startsWith('#NOTICE')) + dataRow.style.backgroundColor = "#880000"; + else + dataRow.style.backgroundColor = "var(--bg-color)"; + dataRow.style.color = "var(--fg-color)"; + dataRow.style.textAlign = "left"; + + dataRow.appendChild(data0); + dataRow.appendChild(data1); + dataRow.appendChild(data2); + dataRow.appendChild(data3); + dataRow.appendChild(data4); + dataRow.appendChild(data5); + tbody.appendChild(dataRow); + + let buttons = []; + if(installBtn) { + buttons.push(installBtn); + } + if(installBtn2) { + buttons.push(installBtn2); + } + if(installBtn3) { + buttons.push(installBtn3); + } + + this.grid_rows[i] = {data:data, buttons:buttons, checkbox:checkbox, control:dataRow}; + } + + const panel = document.createElement('div'); + panel.style.width = "100%"; + panel.appendChild(grid); + + function handleResize() { + const parentHeight = self.element.clientHeight; + const gridHeight = parentHeight - 200; + + grid.style.height = gridHeight + "px"; + } + window.addEventListener("resize", handleResize); + + grid.style.position = "relative"; + grid.style.display = "inline-block"; + grid.style.width = "100%"; + grid.style.height = "100%"; + grid.style.overflowY = "scroll"; + this.element.style.height = "85%"; + this.element.style.width = "80%"; + this.element.appendChild(panel); + + handleResize(); + } + + createFilterCombo() { + let combo = document.createElement("select"); + + combo.style.cssFloat = "left"; + combo.style.fontSize = "14px"; + combo.style.padding = "4px"; + combo.style.background = "black"; + combo.style.marginLeft = "2px"; + combo.style.width = "199px"; + combo.id = `combo-manger-filter`; + combo.style.borderRadius = "15px"; + + let items = + [ + { value:'*', text:'Filter: all' }, + { value:'Disabled', text:'Filter: disabled' }, + { value:'Update', text:'Filter: update' }, + { value:'True', text:'Filter: installed' }, + { value:'False', text:'Filter: not-installed' }, + { value:'Fail', text:'Filter: import failed' }, + ]; + + items.forEach(item => { + const option = document.createElement("option"); + option.value = item.value; + option.text = item.text; + combo.appendChild(option); + }); + + if(this.show_mode == CustomNodesInstaller.ShowMode.UPDATE) { + this.filter = 'Update'; + } + else if(this.show_mode == CustomNodesInstaller.ShowMode.MISSING_NODES) { + this.filter = '*'; + } + + let self = this; + combo.addEventListener('change', function(event) { + self.filter = event.target.value; + self.apply_searchbox(); + }); + + if(self.filter) { + combo.value = self.filter; + } + + return combo; + } + + createHeaderControls() { + let self = this; + this.search_box = $el('input.cm-search-filter', {type:'text', id:'manager-customnode-search-box', placeholder:'input search keyword', value:this.search_keyword}, []); + this.search_box.style.height = "25px"; + this.search_box.onkeydown = (event) => { + if (event.key === 'Enter') { + self.search_keyword = self.search_box.value; + self.apply_searchbox(); + } + if (event.key === 'Escape') { + self.search_keyword = self.search_box.value; + self.apply_searchbox(); + } + }; + + + let search_button = document.createElement("button"); + search_button.className = 
"cm-small-button"; + search_button.innerHTML = "Search"; + search_button.onclick = () => { + self.search_keyword = self.search_box.value; + self.apply_searchbox(); + }; + search_button.style.display = "inline-block"; + + let filter_control = this.createFilterCombo(); + filter_control.style.display = "inline-block"; + + let channel_badge = ''; + if(this.channel != 'default') { + channel_badge = $el('span', {id:'cm-channel-badge'}, [`Channel: ${this.channel} (Incomplete list)`]); + } + else { + + } + let cell = $el('td', {width:'100%'}, [filter_control, channel_badge, this.search_box, ' ', search_button]); + let search_control = $el('table', {width:'100%'}, + [ + $el('tr', {}, [cell]) + ] + ); + + cell.style.textAlign = "right"; + + this.element.appendChild(search_control); + } + + async createBottomControls() { + var close_button = document.createElement("button"); + close_button.className = "cm-small-button"; + close_button.innerHTML = "Close"; + close_button.onclick = () => { this.close(); } + close_button.style.display = "inline-block"; + + this.message_box = $el('div', {id:'custom-installer-message'}, [$el('br'), '']); + this.message_box.style.height = '60px'; + this.message_box.style.verticalAlign = 'middle'; + + this.element.appendChild(this.message_box); + this.element.appendChild(close_button); + } + + async show(show_mode) { + this.show_mode = show_mode; + + if(this.show_mode != CustomNodesInstaller.ShowMode.NORMAL) { + this.search_keyword = ''; + } + + try { + this.invalidateControl(); + + this.element.style.display = "block"; + this.element.style.zIndex = 10001; + } + catch(exception) { + app.ui.dialog.show(`Failed to get custom node list. / ${exception}`); + } + } +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/js/model-downloader.js b/custom_nodes/ComfyUI-Manager/js/model-downloader.js new file mode 100644 index 0000000000000000000000000000000000000000..9642bcd153087bbc0cb8650a457064fd7f3d243f --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/model-downloader.js @@ -0,0 +1,389 @@ +import { app } from "../../scripts/app.js"; +import { api } from "../../scripts/api.js" +import { ComfyDialog, $el } from "../../scripts/ui.js"; +import { install_checked_custom_node, manager_instance, rebootAPI } from "./common.js"; + +async function install_model(target) { + if(ModelInstaller.instance) { + ModelInstaller.instance.startInstall(target); + + try { + const response = await api.fetchApi('/model/install', { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(target) + }); + + const status = await response.json(); + app.ui.dialog.close(); + target.installed = 'True'; + return true; + } + catch(exception) { + app.ui.dialog.show(`Install failed: ${target.title} / ${exception}`); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + finally { + await ModelInstaller.instance.invalidateControl(); + ModelInstaller.instance.updateMessage("
To apply the installed model, please click the 'Refresh' button on the main menu."); + } + } +} + +async function getModelList() { + var mode = manager_instance.datasrc_combo.value; + + const response = await api.fetchApi(`/externalmodel/getlist?mode=${mode}`); + + const data = await response.json(); + return data; +} + +export class ModelInstaller extends ComfyDialog { + static instance = null; + + install_buttons = []; + message_box = null; + data = null; + + clear() { + this.install_buttons = []; + this.message_box = null; + this.data = null; + } + + constructor(app, manager_dialog) { + super(); + this.manager_dialog = manager_dialog; + this.search_keyword = ''; + this.element = $el("div.comfy-modal", { parent: document.body }, []); + } + + createControls() { + return [ + $el("button.cm-small-button", { + type: "button", + textContent: "Close", + onclick: () => { this.close(); } + }) + ]; + } + + startInstall(target) { + const self = ModelInstaller.instance; + + self.updateMessage(`
Installing '${target.name}'`); + + for(let i in self.install_buttons) { + self.install_buttons[i].disabled = true; + self.install_buttons[i].style.backgroundColor = 'gray'; + } + } + + apply_searchbox(data) { + let keyword = this.search_box.value.toLowerCase(); + for(let i in this.grid_rows) { + let data = this.grid_rows[i].data; + let content = data.name.toLowerCase() + data.type.toLowerCase() + data.base.toLowerCase() + data.description.toLowerCase(); + + if(this.filter && this.filter != '*') { + if(this.filter != data.installed) { + this.grid_rows[i].control.style.display = 'none'; + continue; + } + } + + if(keyword == "") + this.grid_rows[i].control.style.display = null; + else if(content.includes(keyword)) { + this.grid_rows[i].control.style.display = null; + } + else { + this.grid_rows[i].control.style.display = 'none'; + } + } + } + + async invalidateControl() { + this.clear(); + this.data = (await getModelList()).models; + + while (this.element.children.length) { + this.element.removeChild(this.element.children[0]); + } + + await this.createHeaderControls(); + + if(this.search_keyword) { + this.search_box.value = this.search_keyword; + } + + await this.createGrid(); + await this.createBottomControls(); + + this.apply_searchbox(this.data); + } + + updateMessage(msg, btn_id) { + this.message_box.innerHTML = msg; + if(btn_id) { + const rebootButton = document.getElementById(btn_id); + const self = this; + rebootButton.addEventListener("click", + function() { + if(rebootAPI()) { + self.close(); + self.manager_dialog.close(); + } + }); + } + } + + async createGrid(models_json) { + var grid = document.createElement('table'); + grid.setAttribute('id', 'external-models-grid'); + + var thead = document.createElement('thead'); + var tbody = document.createElement('tbody'); + + var headerRow = document.createElement('tr'); + thead.style.position = "sticky"; + thead.style.top = "0px"; + thead.style.borderCollapse = "collapse"; + thead.style.tableLayout = "fixed"; + + var header1 = document.createElement('th'); + header1.innerHTML = '  ID  '; + header1.style.width = "20px"; + var header2 = document.createElement('th'); + header2.innerHTML = 'Type'; + header2.style.width = "100px"; + var header3 = document.createElement('th'); + header3.innerHTML = 'Base'; + header3.style.width = "100px"; + var header4 = document.createElement('th'); + header4.innerHTML = 'Name'; + header4.style.width = "30%"; + var header5 = document.createElement('th'); + header5.innerHTML = 'Filename'; + header5.style.width = "20%"; + header5.style.tableLayout = "fixed"; + var header6 = document.createElement('th'); + header6.innerHTML = 'Description'; + header6.style.width = "50%"; + var header_down = document.createElement('th'); + header_down.innerHTML = 'Download'; + header_down.style.width = "50px"; + + thead.appendChild(headerRow); + headerRow.appendChild(header1); + headerRow.appendChild(header2); + headerRow.appendChild(header3); + headerRow.appendChild(header4); + headerRow.appendChild(header5); + headerRow.appendChild(header6); + headerRow.appendChild(header_down); + + headerRow.style.backgroundColor = "Black"; + headerRow.style.color = "White"; + headerRow.style.textAlign = "center"; + headerRow.style.width = "100%"; + headerRow.style.padding = "0"; + + grid.appendChild(thead); + grid.appendChild(tbody); + + this.grid_rows = {}; + + if(this.data) + for (var i = 0; i < this.data.length; i++) { + const data = this.data[i]; + var dataRow = document.createElement('tr'); + var data1 = document.createElement('td'); + 
data1.style.textAlign = "center"; + data1.innerHTML = i+1; + var data2 = document.createElement('td'); + data2.innerHTML = ` ${data.type}`; + var data3 = document.createElement('td'); + data3.innerHTML = ` ${data.base}`; + var data4 = document.createElement('td'); + data4.className = "cm-node-name"; + data4.innerHTML = ` ${data.name}`; + var data5 = document.createElement('td'); + data5.className = "cm-node-filename"; + data5.innerHTML = ` ${data.filename}`; + data5.style.wordBreak = "break-all"; + var data6 = document.createElement('td'); + data6.className = "cm-node-desc"; + data6.innerHTML = data.description; + data6.style.wordBreak = "break-all"; + var data_install = document.createElement('td'); + var installBtn = document.createElement('button'); + data_install.style.textAlign = "center"; + + installBtn.innerHTML = 'Install'; + this.install_buttons.push(installBtn); + + switch(data.installed) { + case 'True': + installBtn.innerHTML = 'Installed'; + installBtn.style.backgroundColor = 'green'; + installBtn.style.color = 'white'; + installBtn.disabled = true; + break; + default: + installBtn.innerHTML = 'Install'; + installBtn.style.backgroundColor = 'black'; + installBtn.style.color = 'white'; + break; + } + + installBtn.style.width = "100px"; + + installBtn.addEventListener('click', function() { + install_model(data); + }); + + data_install.appendChild(installBtn); + + dataRow.style.backgroundColor = "var(--bg-color)"; + dataRow.style.color = "var(--fg-color)"; + dataRow.style.textAlign = "left"; + + dataRow.appendChild(data1); + dataRow.appendChild(data2); + dataRow.appendChild(data3); + dataRow.appendChild(data4); + dataRow.appendChild(data5); + dataRow.appendChild(data6); + dataRow.appendChild(data_install); + tbody.appendChild(dataRow); + + this.grid_rows[i] = {data:data, control:dataRow}; + } + + let self = this; + const panel = document.createElement('div'); + panel.style.width = "100%"; + panel.appendChild(grid); + + function handleResize() { + const parentHeight = self.element.clientHeight; + const gridHeight = parentHeight - 200; + + grid.style.height = gridHeight + "px"; + } + window.addEventListener("resize", handleResize); + + grid.style.position = "relative"; + grid.style.display = "inline-block"; + grid.style.width = "100%"; + grid.style.height = "100%"; + grid.style.overflowY = "scroll"; + this.element.style.height = "85%"; + this.element.style.width = "80%"; + this.element.appendChild(panel); + + handleResize(); + } + + createFilterCombo() { + let combo = document.createElement("select"); + + combo.style.cssFloat = "left"; + combo.style.fontSize = "14px"; + combo.style.padding = "4px"; + combo.style.background = "black"; + combo.style.marginLeft = "2px"; + combo.style.width = "199px"; + combo.id = `combo-manger-filter`; + combo.style.borderRadius = "15px"; + + let items = + [ + { value:'*', text:'Filter: all' }, + { value:'True', text:'Filter: installed' }, + { value:'False', text:'Filter: not-installed' }, + ]; + + items.forEach(item => { + const option = document.createElement("option"); + option.value = item.value; + option.text = item.text; + combo.appendChild(option); + }); + + let self = this; + combo.addEventListener('change', function(event) { + self.filter = event.target.value; + self.apply_searchbox(); + }); + + return combo; + } + + createHeaderControls() { + let self = this; + this.search_box = $el('input.cm-search-filter', {type:'text', id:'manager-model-search-box', placeholder:'input search keyword', value:this.search_keyword}, []); + 
this.search_box.style.height = "25px"; + this.search_box.onkeydown = (event) => { + if (event.key === 'Enter') { + self.search_keyword = self.search_box.value; + self.apply_searchbox(); + } + if (event.key === 'Escape') { + self.search_keyword = self.search_box.value; + self.apply_searchbox(); + } + }; + + let search_button = document.createElement("button"); + search_button.className = "cm-small-button"; + search_button.innerHTML = "Search"; + search_button.onclick = () => { + self.search_keyword = self.search_box.value; + self.apply_searchbox(); + }; + search_button.style.display = "inline-block"; + + let filter_control = this.createFilterCombo(); + filter_control.style.display = "inline-block"; + + let cell = $el('td', {width:'100%'}, [filter_control, this.search_box, ' ', search_button]); + let search_control = $el('table', {width:'100%'}, + [ + $el('tr', {}, [cell]) + ] + ); + + cell.style.textAlign = "right"; + this.element.appendChild(search_control); + } + + async createBottomControls() { + var close_button = document.createElement("button"); + close_button.className = "cm-small-button"; + close_button.innerHTML = "Close"; + close_button.onclick = () => { this.close(); } + close_button.style.display = "inline-block"; + + this.message_box = $el('div', {id:'custom-download-message'}, [$el('br'), '']); + this.message_box.style.height = '60px'; + this.message_box.style.verticalAlign = 'middle'; + + this.element.appendChild(this.message_box); + this.element.appendChild(close_button); + } + + async show() { + try { + this.invalidateControl(); + this.element.style.display = "block"; + this.element.style.zIndex = 10001; + } + catch(exception) { + app.ui.dialog.show(`Failed to get external model list. / ${exception}`); + } + } +} diff --git a/custom_nodes/ComfyUI-Manager/js/node_fixer.js b/custom_nodes/ComfyUI-Manager/js/node_fixer.js new file mode 100644 index 0000000000000000000000000000000000000000..94b4c74770916abaf90dc46592c54fc954b29b07 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/node_fixer.js @@ -0,0 +1,211 @@ +import { app } from "../../scripts/app.js"; +import { api } from "../../scripts/api.js"; + +let double_click_policy = "copy-all"; + +api.fetchApi('/manager/dbl_click/policy') + .then(response => response.text()) + .then(data => set_double_click_policy(data)); + +export function set_double_click_policy(mode) { + double_click_policy = mode; +} + +function addMenuHandler(nodeType, cb) { + const getOpts = nodeType.prototype.getExtraMenuOptions; + nodeType.prototype.getExtraMenuOptions = function () { + const r = getOpts.apply(this, arguments); + cb.apply(this, arguments); + return r; + }; +} + +function distance(node1, node2) { + let dx = (node1.pos[0] + node1.size[0]/2) - (node2.pos[0] + node2.size[0]/2); + let dy = (node1.pos[1] + node1.size[1]/2) - (node2.pos[1] + node2.size[1]/2); + return Math.sqrt(dx * dx + dy * dy); +} + +function lookup_nearest_nodes(node) { + let nearest_distance = Infinity; + let nearest_node = null; + for(let other of app.graph._nodes) { + if(other === node) + continue; + + let dist = distance(node, other); + if (dist < nearest_distance && dist < 1000) { + nearest_distance = dist; + nearest_node = other; + } + } + + return nearest_node; +} + +function lookup_nearest_inputs(node) { + let input_map = {}; + + for(let i in node.inputs) { + let input = node.inputs[i]; + + if(input.link || input_map[input.type]) + continue; + + input_map[input.type] = {distance: Infinity, input_name: input.name, node: null, slot: null}; + } + + let x = node.pos[0]; + 
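+    // Together with y below, x anchors the search at the node's left edge at mid-height;
+    // the loop only accepts source nodes whose outputs end to the left of this point
+    // (candidates with dx < 0 are skipped).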
let y = node.pos[1] + node.size[1]/2; + + for(let other of app.graph._nodes) { + if(other === node || !other.outputs) + continue; + + let dx = x - (other.pos[0] + other.size[0]); + let dy = y - (other.pos[1] + other.size[1]/2); + + if(dx < 0) + continue; + + let dist = Math.sqrt(dx * dx + dy * dy); + + for(let input_type in input_map) { + for(let j in other.outputs) { + let output = other.outputs[j]; + if(output.type == input_type) { + if(input_map[input_type].distance > dist) { + input_map[input_type].distance = dist; + input_map[input_type].node = other; + input_map[input_type].slot = parseInt(j); + } + } + } + } + } + + let res = {}; + for (let i in input_map) { + if (input_map[i].node) { + res[i] = input_map[i]; + } + } + + return res; +} + +function connect_inputs(nearest_inputs, node) { + for(let i in nearest_inputs) { + let info = nearest_inputs[i]; + info.node.connect(info.slot, node.id, info.input_name); + } +} + +function node_info_copy(src, dest, connect_both) { + // copy input connections + for(let i in src.inputs) { + let input = src.inputs[i]; + if(input.link) { + let link = app.graph.links[input.link]; + let src_node = app.graph.getNodeById(link.origin_id); + src_node.connect(link.origin_slot, dest.id, input.name); + } + } + + // copy output connections + if(connect_both) { + let output_links = {}; + for(let i in src.outputs) { + let output = src.outputs[i]; + if(output.links) { + let links = []; + for(let j in output.links) { + links.push(app.graph.links[output.links[j]]); + } + output_links[output.name] = links; + } + } + + for(let i in dest.outputs) { + let links = output_links[dest.outputs[i].name]; + if(links) { + for(let j in links) { + let link = links[j]; + let target_node = app.graph.getNodeById(link.target_id); + dest.connect(parseInt(i), target_node, link.target_slot); + } + } + } + } + + app.graph.afterChange(); +} + +app.registerExtension({ + name: "Comfy.Manager.NodeFixer", + + async nodeCreated(node, app) { + let orig_dblClick = node.onDblClick; + node.onDblClick = function (e, pos, self) { + orig_dblClick?.apply?.(this, arguments); + + if((!node.inputs && !node.outputs) || pos[1] > 0) + return; + + switch(double_click_policy) { + case "copy-all": + case "copy-input": + { + if(node.inputs?.some(x => x.link != null) || node.outputs?.some(x => x.links != null && x.links.length > 0) ) + return; + + let src_node = lookup_nearest_nodes(node); + if(src_node) + node_info_copy(src_node, node, double_click_policy == "copy-all"); + } + break; + case "possible-input": + { + let nearest_inputs = lookup_nearest_inputs(node); + if(nearest_inputs) + connect_inputs(nearest_inputs, node); + } + break; + case "dual": + { + if(pos[0] < node.size[0]/2) { + // left: possible-input + let nearest_inputs = lookup_nearest_inputs(node); + if(nearest_inputs) + connect_inputs(nearest_inputs, node); + } + else { + // right: copy-all + if(node.inputs?.some(x => x.link != null) || node.outputs?.some(x => x.links != null && x.links.length > 0) ) + return; + + let src_node = lookup_nearest_nodes(node); + if(src_node) + node_info_copy(src_node, node, true); + } + } + break; + } + } + }, + + beforeRegisterNodeDef(nodeType, nodeData, app) { + addMenuHandler(nodeType, function (_, options) { + options.push({ + content: "Fix node (recreate)", + callback: () => { + let new_node = LiteGraph.createNode(nodeType.comfyClass); + new_node.pos = [this.pos[0], this.pos[1]]; + app.canvas.graph.add(new_node, false); + node_info_copy(this, new_node); + app.canvas.graph.remove(this); + }, + }); + }); + } +}); 
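+
+// Usage sketch (an assumption about how a caller might use this export, not code from
+// this file): the valid policy strings are exactly the switch cases handled above.
+//
+//   import { set_double_click_policy } from "./node_fixer.js";
+//   set_double_click_policy("dual"); // or "copy-all", "copy-input", "possible-input"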
diff --git a/custom_nodes/ComfyUI-Manager/js/snapshot.js b/custom_nodes/ComfyUI-Manager/js/snapshot.js new file mode 100644 index 0000000000000000000000000000000000000000..2e26df55f7b8de57402bade2c36bb880e86f5a86 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/snapshot.js @@ -0,0 +1,292 @@ +import { app } from "../../scripts/app.js"; +import { api } from "../../scripts/api.js"; +import { ComfyDialog, $el } from "../../scripts/ui.js"; +import { manager_instance, rebootAPI } from "./common.js"; + + +async function restore_snapshot(target) { + if(SnapshotManager.instance) { + try { + const response = await api.fetchApi(`/snapshot/restore?target=${target}`, { cache: "no-store" }); + if(response.status == 400) { + app.ui.dialog.show(`Restore snapshot failed: ${target}`); + app.ui.dialog.element.style.zIndex = 10010; + } + + app.ui.dialog.close(); + return true; + } + catch(exception) { + app.ui.dialog.show(`Restore snapshot failed: ${target} / ${exception}`); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + finally { + await SnapshotManager.instance.invalidateControl(); + SnapshotManager.instance.updateMessage("To apply the snapshot, please restart ComfyUI and refresh the browser.", 'cm-reboot-button'); + } + } +} + +async function remove_snapshot(target) { + if(SnapshotManager.instance) { + try { + const response = await api.fetchApi(`/snapshot/remove?target=${target}`, { cache: "no-store" }); + if(response.status == 400) { + app.ui.dialog.show(`Remove snapshot failed: ${target}`); + app.ui.dialog.element.style.zIndex = 10010; + } + + app.ui.dialog.close(); + return true; + } + catch(exception) { + app.ui.dialog.show(`Remove snapshot failed: ${target} / ${exception}`); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + finally { + await SnapshotManager.instance.invalidateControl(); + } + } +} + +async function save_current_snapshot() { + try { + const response = await api.fetchApi('/snapshot/save', { cache: "no-store" }); + app.ui.dialog.close(); + return true; + } + catch(exception) { + app.ui.dialog.show(`Backup snapshot failed: ${exception}`); + app.ui.dialog.element.style.zIndex = 10010; + return false; + } + finally { + await SnapshotManager.instance.invalidateControl(); + SnapshotManager.instance.updateMessage("
Current snapshot saved."); + } +} + +async function getSnapshotList() { + const response = await api.fetchApi(`/snapshot/getlist`); + const data = await response.json(); + return data; +} + +export class SnapshotManager extends ComfyDialog { + static instance = null; + + restore_buttons = []; + message_box = null; + data = null; + + clear() { + this.restore_buttons = []; + this.message_box = null; + this.data = null; + } + + constructor(app, manager_dialog) { + super(); + this.manager_dialog = manager_dialog; + this.search_keyword = ''; + this.element = $el("div.comfy-modal", { parent: document.body }, []); + } + + async remove_item() { + this.disableButtons(); + + await this.invalidateControl(); + } + + createControls() { + return [ + $el("button.cm-small-button", { + type: "button", + textContent: "Close", + onclick: () => { this.close(); } + }) + ]; + } + + startRestore(target) { + const self = SnapshotManager.instance; + + self.updateMessage(`
Restore snapshot '${target.name}'`); + + for(let i in self.restore_buttons) { + self.restore_buttons[i].disabled = true; + self.restore_buttons[i].style.backgroundColor = 'gray'; + } + } + + async invalidateControl() { + this.clear(); + this.data = (await getSnapshotList()).items; + + while (this.element.children.length) { + this.element.removeChild(this.element.children[0]); + } + + await this.createGrid(); + await this.createBottomControls(); + } + + updateMessage(msg, btn_id) { + this.message_box.innerHTML = msg; + if(btn_id) { + const rebootButton = document.getElementById(btn_id); + const self = this; + rebootButton.onclick = function() { + if(rebootAPI()) { + self.close(); + self.manager_dialog.close(); + } + }; + } + } + + async createGrid(models_json) { + var grid = document.createElement('table'); + grid.setAttribute('id', 'snapshot-list-grid'); + + var thead = document.createElement('thead'); + var tbody = document.createElement('tbody'); + + var headerRow = document.createElement('tr'); + thead.style.position = "sticky"; + thead.style.top = "0px"; + thead.style.borderCollapse = "collapse"; + thead.style.tableLayout = "fixed"; + + var header1 = document.createElement('th'); + header1.innerHTML = '  ID  '; + header1.style.width = "20px"; + var header2 = document.createElement('th'); + header2.innerHTML = 'Datetime'; + header2.style.width = "100%"; + var header_button = document.createElement('th'); + header_button.innerHTML = 'Action'; + header_button.style.width = "100px"; + + thead.appendChild(headerRow); + headerRow.appendChild(header1); + headerRow.appendChild(header2); + headerRow.appendChild(header_button); + + headerRow.style.backgroundColor = "Black"; + headerRow.style.color = "White"; + headerRow.style.textAlign = "center"; + headerRow.style.width = "100%"; + headerRow.style.padding = "0"; + + grid.appendChild(thead); + grid.appendChild(tbody); + + this.grid_rows = {}; + + if(this.data) + for (var i = 0; i < this.data.length; i++) { + const data = this.data[i]; + var dataRow = document.createElement('tr'); + var data1 = document.createElement('td'); + data1.style.textAlign = "center"; + data1.innerHTML = i+1; + var data2 = document.createElement('td'); + data2.innerHTML = ` ${data}`; + var data_button = document.createElement('td'); + data_button.style.textAlign = "center"; + + var restoreBtn = document.createElement('button'); + restoreBtn.innerHTML = 'Restore'; + restoreBtn.style.width = "100px"; + restoreBtn.style.backgroundColor = 'blue'; + + restoreBtn.addEventListener('click', function() { + restore_snapshot(data); + }); + + var removeBtn = document.createElement('button'); + removeBtn.innerHTML = 'Remove'; + removeBtn.style.width = "100px"; + removeBtn.style.backgroundColor = 'red'; + + removeBtn.addEventListener('click', function() { + remove_snapshot(data); + }); + + data_button.appendChild(restoreBtn); + data_button.appendChild(removeBtn); + + dataRow.style.backgroundColor = "var(--bg-color)"; + dataRow.style.color = "var(--fg-color)"; + dataRow.style.textAlign = "left"; + + dataRow.appendChild(data1); + dataRow.appendChild(data2); + dataRow.appendChild(data_button); + tbody.appendChild(dataRow); + + this.grid_rows[i] = {data:data, control:dataRow}; + } + + let self = this; + const panel = document.createElement('div'); + panel.style.width = "100%"; + panel.appendChild(grid); + + function handleResize() { + const parentHeight = self.element.clientHeight; + const gridHeight = parentHeight - 200; + + grid.style.height = gridHeight + "px"; + } + 
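+    // Note: createGrid() runs on every invalidateControl(), so each rebuild registers
+    // another window resize listener; none are removed, and stale handlers keep resizing
+    // tables that are no longer attached to the dialog.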
window.addEventListener("resize", handleResize); + + grid.style.position = "relative"; + grid.style.display = "inline-block"; + grid.style.width = "100%"; + grid.style.height = "100%"; + grid.style.overflowY = "scroll"; + this.element.style.height = "85%"; + this.element.style.width = "80%"; + this.element.appendChild(panel); + + handleResize(); + } + + async createBottomControls() { + var close_button = document.createElement("button"); + close_button.className = "cm-small-button"; + close_button.innerHTML = "Close"; + close_button.onclick = () => { this.close(); }; + close_button.style.display = "inline-block"; + + var save_button = document.createElement("button"); + save_button.className = "cm-small-button"; + save_button.innerHTML = "Save snapshot"; + save_button.onclick = () => { save_current_snapshot(); }; + save_button.style.display = "inline-block"; + save_button.style.cssFloat = "right"; + + this.message_box = $el('div', {id:'custom-download-message'}, [$el('br'), '']); + this.message_box.style.height = '60px'; + this.message_box.style.verticalAlign = 'middle'; + + this.element.appendChild(this.message_box); + this.element.appendChild(close_button); + this.element.appendChild(save_button); + } + + async show() { + try { + this.invalidateControl(); + this.element.style.display = "block"; + this.element.style.zIndex = 10001; + } + catch(exception) { + app.ui.dialog.show(`Failed to get snapshot list. / ${exception}`); + } + } +} diff --git a/custom_nodes/ComfyUI-Manager/js/terminal.js b/custom_nodes/ComfyUI-Manager/js/terminal.js new file mode 100644 index 0000000000000000000000000000000000000000..091959fa5f60ae880b1f19f5ea014c56d9f56415 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/js/terminal.js @@ -0,0 +1,81 @@ +import {app} from "../../scripts/app.js"; +import {api} from "../../scripts/api.js"; +import {ComfyWidgets} from "../../scripts/widgets.js"; +// Node that mirrors the server terminal log into the graph + +let terminal_node; +let log_mode = false; + +app.registerExtension({ + name: "Comfy.Manager.Terminal", + + registerCustomNodes() { + class TerminalNode { + color = "#222222"; + bgcolor = "#000000"; + groupcolor = LGraphCanvas.node_colors.black.groupcolor; + constructor() { + this.logs = []; + + if (!this.properties) { + this.properties = {}; + this.properties.text=""; + } + + ComfyWidgets.STRING(this, "", ["", {default:this.properties.text, multiline: true}], app); + ComfyWidgets.BOOLEAN(this, "mode", ["", {default:true, label_on:'Logging', label_off:'Stop'}], app); + ComfyWidgets.INT(this, "lines", ["", {default:500, min:10, max:10000, steps:1}], app); + + let self = this; + Object.defineProperty(this.widgets[1], 'value', { + set: (v) => { + api.fetchApi(`/manager/terminal?mode=${v}`, {}); + log_mode = v; + }, + get: () => { + return log_mode; + } + }); + + this.serialize_widgets = false; + this.isVirtualNode = true; + + if(terminal_node) { + try { + terminal_node.widgets[0].value = 'The output of this node is disabled because another terminal node has appeared.'; + } + catch {} + } + terminal_node = this; + } + } + + // Register the node type with LiteGraph + LiteGraph.registerNodeType( + "Terminal Log //CM", + Object.assign(TerminalNode, { + title_mode: LiteGraph.NORMAL_TITLE, + title: "Terminal Log (Manager)", + collapsable: true, + }) + ); + + TerminalNode.category = "utils"; + }, +}); + + +function terminalFeedback(event) { + if(terminal_node) { + terminal_node.logs.push(event.detail.data); + if(terminal_node.logs.length > terminal_node.widgets[2].value) {
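+            // widgets[2] is the "lines" INT widget: once the buffer exceeds that cap,
+            // drop the oldest entry (and a stray leading blank line, handled below).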
terminal_node.logs.shift(); + if(terminal_node.logs[0] == '' || terminal_node.logs[0] == '\n') + terminal_node.logs.shift(); + } + terminal_node.widgets[0].value = [...terminal_node.logs].reverse().join('').trim(); + } +} + +api.addEventListener("manager-terminal-feedback", terminalFeedback); diff --git a/custom_nodes/ComfyUI-Manager/json-checker.py b/custom_nodes/ComfyUI-Manager/json-checker.py new file mode 100644 index 0000000000000000000000000000000000000000..a6c1317ff0a7f5fc80b18a79a6b39034b315671d --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/json-checker.py @@ -0,0 +1,23 @@ +import json +import argparse + +def check_json_syntax(file_path): + try: + with open(file_path, 'r') as file: + json_str = file.read() + json.loads(json_str) + print(f"[ OK ] {file_path}") + except json.JSONDecodeError as e: + print(f"[FAIL] {file_path}\n\n {e}\n") + except FileNotFoundError: + print(f"[FAIL] {file_path}\n\n File not found\n") + +def main(): + parser = argparse.ArgumentParser(description="JSON File Syntax Checker") + parser.add_argument("file_path", type=str, help="Path to the JSON file for syntax checking") + + args = parser.parse_args() + check_json_syntax(args.file_path) + +if __name__ == "__main__": + main() diff --git a/custom_nodes/ComfyUI-Manager/misc/Impact.pack b/custom_nodes/ComfyUI-Manager/misc/Impact.pack new file mode 100644 index 0000000000000000000000000000000000000000..93fd32847cc827929cb6a0987466aaf2628c3145 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/misc/Impact.pack @@ -0,0 +1,444 @@ +{ + "Impact::MAKE_BASIC_PIPE": { + "category": "", + "config": { + "1": { + "input": { + "text": { + "name": "Positive prompt" + } + } + }, + "2": { + "input": { + "text": { + "name": "Negative prompt" + } + } + } + }, + "datetime": 1705418802481, + "external": [], + "links": [ + [ + 0, + 1, + 1, + 0, + 1, + "CLIP" + ], + [ + 0, + 1, + 2, + 0, + 1, + "CLIP" + ], + [ + 0, + 0, + 3, + 0, + 1, + "MODEL" + ], + [ + 0, + 1, + 3, + 1, + 1, + "CLIP" + ], + [ + 0, + 2, + 3, + 2, + 1, + "VAE" + ], + [ + 1, + 0, + 3, + 3, + 3, + "CONDITIONING" + ], + [ + 2, + 0, + 3, + 4, + 4, + "CONDITIONING" + ] + ], + "nodes": [ + { + "flags": {}, + "index": 0, + "mode": 0, + "order": 0, + "outputs": [ + { + "links": [], + "name": "MODEL", + "shape": 3, + "slot_index": 0, + "type": "MODEL" + }, + { + "links": [], + "name": "CLIP", + "shape": 3, + "slot_index": 1, + "type": "CLIP" + }, + { + "links": [], + "name": "VAE", + "shape": 3, + "slot_index": 2, + "type": "VAE" + } + ], + "pos": [ + 550, + 360 + ], + "properties": { + "Node name for S&R": "CheckpointLoaderSimple" + }, + "size": { + "0": 315, + "1": 98 + }, + "type": "CheckpointLoaderSimple", + "widgets_values": [ + "SDXL/sd_xl_base_1.0_0.9vae.safetensors" + ] + }, + { + "flags": {}, + "index": 1, + "inputs": [ + { + "link": null, + "name": "clip", + "type": "CLIP" + } + ], + "mode": 0, + "order": 1, + "outputs": [ + { + "links": [], + "name": "CONDITIONING", + "shape": 3, + "slot_index": 0, + "type": "CONDITIONING" + } + ], + "pos": [ + 940, + 480 + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "size": { + "0": 263, + "1": 99 + }, + "title": "Positive", + "type": "CLIPTextEncode", + "widgets_values": [ + "" + ] + }, + { + "flags": {}, + "index": 2, + "inputs": [ + { + "link": null, + "name": "clip", + "type": "CLIP" + } + ], + "mode": 0, + "order": 2, + "outputs": [ + { + "links": [], + "name": "CONDITIONING", + "shape": 3, + "slot_index": 0, + "type": "CONDITIONING" + } + ], + "pos": [ + 940, + 640 + ], + "properties": { + "Node 
name for S&R": "CLIPTextEncode" + }, + "size": { + "0": 263, + "1": 99 + }, + "title": "Negative", + "type": "CLIPTextEncode", + "widgets_values": [ + "" + ] + }, + { + "flags": {}, + "index": 3, + "inputs": [ + { + "link": null, + "name": "model", + "type": "MODEL" + }, + { + "link": null, + "name": "clip", + "type": "CLIP" + }, + { + "link": null, + "name": "vae", + "type": "VAE" + }, + { + "link": null, + "name": "positive", + "type": "CONDITIONING" + }, + { + "link": null, + "name": "negative", + "type": "CONDITIONING" + } + ], + "mode": 0, + "order": 3, + "outputs": [ + { + "links": null, + "name": "basic_pipe", + "shape": 3, + "slot_index": 0, + "type": "BASIC_PIPE" + } + ], + "pos": [ + 1320, + 360 + ], + "properties": { + "Node name for S&R": "ToBasicPipe" + }, + "size": { + "0": 241.79998779296875, + "1": 106 + }, + "type": "ToBasicPipe" + } + ], + "packname": "Impact", + "version": "1.0" + }, + "Impact::SIMPLE_DETAILER_PIPE": { + "category": "", + "config": { + "0": { + "output": { + "0": { + "visible": false + }, + "1": { + "visible": false + } + } + }, + "2": { + "input": { + "Select to add LoRA": { + "visible": false + }, + "Select to add Wildcard": { + "visible": false + }, + "wildcard": { + "visible": false + } + } + } + }, + "datetime": 1705419147116, + "external": [], + "links": [ + [ + null, + 0, + 2, + 0, + 6, + "BASIC_PIPE" + ], + [ + 0, + 0, + 2, + 1, + 13, + "BBOX_DETECTOR" + ], + [ + 1, + 0, + 2, + 2, + 15, + "SAM_MODEL" + ] + ], + "nodes": [ + { + "flags": {}, + "index": 0, + "mode": 0, + "order": 2, + "outputs": [ + { + "links": [], + "name": "BBOX_DETECTOR", + "shape": 3, + "type": "BBOX_DETECTOR" + }, + { + "links": null, + "name": "SEGM_DETECTOR", + "shape": 3, + "type": "SEGM_DETECTOR" + } + ], + "pos": [ + 590, + 830 + ], + "properties": { + "Node name for S&R": "UltralyticsDetectorProvider" + }, + "size": { + "0": 315, + "1": 78 + }, + "type": "UltralyticsDetectorProvider", + "widgets_values": [ + "bbox/Eyeful_v1.pt" + ] + }, + { + "flags": {}, + "index": 1, + "mode": 0, + "order": 3, + "outputs": [ + { + "links": [], + "name": "SAM_MODEL", + "shape": 3, + "type": "SAM_MODEL" + } + ], + "pos": [ + 590, + 960 + ], + "properties": { + "Node name for S&R": "SAMLoader" + }, + "size": { + "0": 315, + "1": 82 + }, + "type": "SAMLoader", + "widgets_values": [ + "sam_vit_b_01ec64.pth", + "AUTO" + ] + }, + { + "flags": {}, + "index": 2, + "inputs": [ + { + "link": null, + "name": "basic_pipe", + "type": "BASIC_PIPE" + }, + { + "link": null, + "name": "bbox_detector", + "slot_index": 1, + "type": "BBOX_DETECTOR" + }, + { + "link": null, + "name": "sam_model_opt", + "slot_index": 2, + "type": "SAM_MODEL" + }, + { + "link": null, + "name": "segm_detector_opt", + "type": "SEGM_DETECTOR" + }, + { + "link": null, + "name": "detailer_hook", + "type": "DETAILER_HOOK" + } + ], + "mode": 0, + "order": 5, + "outputs": [ + { + "links": null, + "name": "detailer_pipe", + "shape": 3, + "type": "DETAILER_PIPE" + } + ], + "pos": [ + 1044, + 812 + ], + "properties": { + "Node name for S&R": "BasicPipeToDetailerPipe" + }, + "size": { + "0": 400, + "1": 204 + }, + "type": "BasicPipeToDetailerPipe", + "widgets_values": [ + "", + "Select the LoRA to add to the text", + "Select the Wildcard to add to the text" + ] + } + ], + "packname": "Impact", + "version": "1.0" + } +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/misc/custom-nodes.jpg b/custom_nodes/ComfyUI-Manager/misc/custom-nodes.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..10482f1b7b55a617f3c9cabba55b318b5ab4d714 Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/custom-nodes.jpg differ diff --git a/custom_nodes/ComfyUI-Manager/misc/main.jpg b/custom_nodes/ComfyUI-Manager/misc/main.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ec31f76fcaf06cbd2b2b51d98f4d244bcd34fcb6 Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/main.jpg differ diff --git a/custom_nodes/ComfyUI-Manager/misc/main.png b/custom_nodes/ComfyUI-Manager/misc/main.png new file mode 100644 index 0000000000000000000000000000000000000000..910da417bf2cbab3828a9beea5ef83ff3b86bbb5 Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/main.png differ diff --git a/custom_nodes/ComfyUI-Manager/misc/menu.jpg b/custom_nodes/ComfyUI-Manager/misc/menu.jpg new file mode 100644 index 0000000000000000000000000000000000000000..bf98d4217cae0e36f923d9ff86a0163a244713db Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/menu.jpg differ diff --git a/custom_nodes/ComfyUI-Manager/misc/missing-list.png b/custom_nodes/ComfyUI-Manager/misc/missing-list.png new file mode 100644 index 0000000000000000000000000000000000000000..f1cc4fd2cf3681b48a43a6cc9eededa85ea9697e Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/missing-list.png differ diff --git a/custom_nodes/ComfyUI-Manager/misc/missing-menu.png b/custom_nodes/ComfyUI-Manager/misc/missing-menu.png new file mode 100644 index 0000000000000000000000000000000000000000..5e74744b1c46c079841aa5ae60a6c0f891aea5d1 Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/missing-menu.png differ diff --git a/custom_nodes/ComfyUI-Manager/misc/models.png b/custom_nodes/ComfyUI-Manager/misc/models.png new file mode 100644 index 0000000000000000000000000000000000000000..9e985fb498d528218ec0f3ada8508c2abd6a6926 Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/models.png differ diff --git a/custom_nodes/ComfyUI-Manager/misc/nickname.jpg b/custom_nodes/ComfyUI-Manager/misc/nickname.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e3cfdcac5f0be077c5f90543b0c480b87b0f2a1d Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/nickname.jpg differ diff --git a/custom_nodes/ComfyUI-Manager/misc/portable-install.png b/custom_nodes/ComfyUI-Manager/misc/portable-install.png new file mode 100644 index 0000000000000000000000000000000000000000..1771745132f0d541d17da70f5f131cef651cdada Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/portable-install.png differ diff --git a/custom_nodes/ComfyUI-Manager/misc/share-setting.jpg b/custom_nodes/ComfyUI-Manager/misc/share-setting.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0ceacf2cdccfe49f7b1a54e5c7aac83232df2504 Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/share-setting.jpg differ diff --git a/custom_nodes/ComfyUI-Manager/misc/share.jpg b/custom_nodes/ComfyUI-Manager/misc/share.jpg new file mode 100644 index 0000000000000000000000000000000000000000..97c0ae7de58265351e8cd4dfbb9489fedc41abb5 Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/share.jpg differ diff --git a/custom_nodes/ComfyUI-Manager/misc/snapshot.jpg b/custom_nodes/ComfyUI-Manager/misc/snapshot.jpg new file mode 100644 index 0000000000000000000000000000000000000000..33269564bbd2b286994c78a92bf09905d251907b Binary files /dev/null and b/custom_nodes/ComfyUI-Manager/misc/snapshot.jpg differ diff --git 
a/custom_nodes/ComfyUI-Manager/model-list.json b/custom_nodes/ComfyUI-Manager/model-list.json new file mode 100644 index 0000000000000000000000000000000000000000..e8b34a9315e1a95aefd43e0836363e16a34964ff --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/model-list.json @@ -0,0 +1,1999 @@ +{ + "models": [ + { + "name": "TAESDXL Decoder", + "type": "TAESD", + "base": "SDXL", + "save_path": "vae_approx", + "description": "(SDXL Version) To view the preview in high quality while running samples in ComfyUI, you will need this model.", + "reference": "https://github.com/madebyollin/taesd", + "filename": "taesdxl_decoder.pth", + "url": "https://github.com/madebyollin/taesd/raw/main/taesdxl_decoder.pth" + }, + { + "name": "TAESDXL Encoder", + "type": "TAESD", + "base": "SDXL", + "save_path": "vae_approx", + "description": "(SDXL Version) To view the preview in high quality while running samples in ComfyUI, you will need this model.", + "reference": "https://github.com/madebyollin/taesd", + "filename": "taesdxl_encoder.pth", + "url": "https://github.com/madebyollin/taesd/raw/main/taesdxl_encoder.pth" + }, + { + "name": "TAESD Decoder", + "type": "TAESD", + "base": "SD1.x", + "save_path": "vae_approx", + "description": "To view the preview in high quality while running samples in ComfyUI, you will need this model.", + "reference": "https://github.com/madebyollin/taesd", + "filename": "taesd_decoder.pth", + "url": "https://github.com/madebyollin/taesd/raw/main/taesd_decoder.pth" + }, + { + "name": "TAESD Encoder", + "type": "TAESD", + "base": "SD1.x", + "save_path": "vae_approx", + "description": "To view the preview in high quality while running samples in ComfyUI, you will need this model.", + "reference": "https://github.com/madebyollin/taesd", + "filename": "taesd_encoder.pth", + "url": "https://github.com/madebyollin/taesd/raw/main/taesd_encoder.pth" + }, + { + "name": "RealESRGAN x2", + "type": "upscale", + "base": "upscale", + "save_path": "default", + "description": "RealESRGAN x2 upscaler model", + "reference": "https://huggingface.co/ai-forever/Real-ESRGAN", + "filename": "RealESRGAN_x2.pth", + "url": "https://huggingface.co/ai-forever/Real-ESRGAN/resolve/main/RealESRGAN_x2.pth" + }, + { + "name": "RealESRGAN x4", + "type": "upscale", + "base": "upscale", + "save_path": "default", + "description": "RealESRGAN x4 upscaler model", + "reference": "https://huggingface.co/ai-forever/Real-ESRGAN", + "filename": "RealESRGAN_x4.pth", + "url": "https://huggingface.co/ai-forever/Real-ESRGAN/resolve/main/RealESRGAN_x4.pth" + }, + { + "name": "ESRGAN x4", + "type": "upscale", + "base": "upscale", + "save_path": "default", + "description": "ESRGAN x4 upscaler model", + "reference": "https://huggingface.co/Afizi/ESRGAN_4x.pth", + "filename": "ESRGAN_4x.pth", + "url": "https://huggingface.co/Afizi/ESRGAN_4x.pth/resolve/main/ESRGAN_4x.pth" + }, + { + "name": "4x_foolhardy_Remacri", + "type": "upscale", + "base": "upscale", + "save_path": "default", + "description": "4x_foolhardy_Remacri upscaler model", + "reference": "https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri", + "filename": "4x_foolhardy_Remacri.pth", + "url": "https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri/resolve/main/4x_foolhardy_Remacri.pth" + }, + { + "name": "4x-AnimeSharp", + "type": "upscale", + "base": "upscale", + "save_path": "default", + "description": "4x-AnimeSharp upscaler model", + "reference": "https://huggingface.co/Kim2091/AnimeSharp/", + "filename": "4x-AnimeSharp.pth", + "url":
"https://huggingface.co/Kim2091/AnimeSharp/resolve/main/4x-AnimeSharp.pth" + }, + { + "name": "4x-UltraSharp", + "type": "upscale", + "base": "upscale", + "save_path": "default", + "description": "4x-UltraSharp upscaler model", + "reference": "https://huggingface.co/Kim2091/UltraSharp/", + "filename": "4x-UltraSharp.pth", + "url": "https://huggingface.co/Kim2091/UltraSharp/resolve/main/4x-UltraSharp.pth" + }, + { + "name": "4x_NMKD-Siax_200k", + "type": "upscale", + "base": "upscale", + "save_path": "default", + "description": "4x_NMKD-Siax_200k upscaler model", + "reference": "https://huggingface.co/gemasai/4x_NMKD-Siax_200k", + "filename": "4x_NMKD-Siax_200k.pth", + "url": "https://huggingface.co/gemasai/4x_NMKD-Siax_200k/resolve/main/4x_NMKD-Siax_200k.pth" + }, + { + "name": "8x_NMKD-Superscale_150000_G", + "type": "upscale", + "base": "upscale", + "save_path": "default", + "description": "8x_NMKD-Superscale_150000_G upscaler model", + "reference": "https://huggingface.co/uwg/upscaler", + "filename": "8x_NMKD-Superscale_150000_G.pth", + "url": "https://huggingface.co/uwg/upscaler/resolve/main/ESRGAN/8x_NMKD-Superscale_150000_G.pth" + }, + { + "name": "LDSR(Latent Diffusion Super Resolution)", + "type": "upscale", + "base": "upscale", + "save_path": "upscale_models/ldsr", + "description": "LDSR upscale model. Through the [a/ComfyUI-Flowty-LDSR](https://github.com/flowtyone/ComfyUI-Flowty-LDSR) extension, the upscale model can be utilized.", + "reference": "https://github.com/CompVis/latent-diffusion", + "filename": "last.ckpt", + "url": "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1" + }, + { + "name": "stabilityai/stable-diffusion-x4-upscaler", + "type": "checkpoints", + "base": "upscale", + "save_path": "checkpoints/upscale", + "description": "[3.53GB] This upscaling model is a latent text-guided diffusion model and should be used with SD_4XUpscale_Conditioning and KSampler.", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler", + "filename": "x4-upscaler-ema.safetensors", + "url": "https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.safetensors" + }, + { + "name": "Inswapper-fp16 (face swap)", + "type": "insightface", + "base" : "inswapper", + "save_path": "insightface", + "description": "[264MB] Checkpoint of the insightface swapper model\n(used by ComfyUI-FaceSwap, comfyui-reactor-node, CharacterFaceSwap,\nComfyUI roop and comfy_mtb)", + "reference": "https://github.com/facefusion/facefusion-assets", + "filename": "inswapper_128_fp16.onnx", + "url": "https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128_fp16.onnx" + }, + { + "name": "Inswapper (face swap)", + "type": "insightface", + "base" : "inswapper", + "save_path": "insightface", + "description": "[529MB] Checkpoint of the insightface swapper model\n(used by ComfyUI-FaceSwap, comfyui-reactor-node, CharacterFaceSwap,\nComfyUI roop and comfy_mtb)", + "reference": "https://github.com/facefusion/facefusion-assets", + "filename": "inswapper_128.onnx", + "url": "https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128.onnx" + }, + { + "name": "Deepbump", + "type": "deepbump", + "base": "deepbump", + "save_path": "deepbump", + "description": "Checkpoint of the deepbump model to generate height and normal maps textures from an image (requires comfy_mtb)", + "reference": "https://github.com/HugoTini/DeepBump", + "filename": "deepbump256.onnx", + "url": 
"https://github.com/HugoTini/DeepBump/raw/master/deepbump256.onnx" + }, + { + "name": "GFPGAN 1.3", + "type": "face_restore", + "base": "face_restore", + "save_path": "face_restore", + "description": "Face restoration", + "reference": "https://github.com/TencentARC/GFPGAN", + "filename": "GFPGANv1.3.pth", + "url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth" + }, + { + "name": "GFPGAN 1.4", + "type": "face_restore", + "base": "face_restore", + "save_path": "face_restore", + "description": "Face restoration", + "reference": "https://github.com/TencentARC/GFPGAN", + "filename": "GFPGANv1.4.pth", + "url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth" + }, + { + "name": "RestoreFormer", + "type": "face_restore", + "base": "face_restore", + "save_path": "face_restore", + "description": "Face restoration", + "reference": "https://github.com/TencentARC/GFPGAN", + "filename": "RestoreFormer.pth", + "url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/RestoreFormer.pth" + }, + { + "name": "Stable Video Diffusion Image-to-Video", + "type": "checkpoints", + "base": "SVD", + "save_path": "checkpoints/SVD", + "description": "Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.\nNOTE: 14 frames @ 576x1024", + "reference": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid", + "filename": "svd.safetensors", + "url": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/resolve/main/svd.safetensors" + }, + { + "name": "stabilityai/Stable Zero123", + "type": "zero123", + "base": "zero123", + "save_path": "checkpoints/zero123", + "description": "Stable Zero123 is a model for view-conditioned image generation based on [a/Zero123](https://github.com/cvlab-columbia/zero123).", + "reference": "https://huggingface.co/stabilityai/stable-zero123", + "filename": "stable_zero123.ckpt", + "url": "https://huggingface.co/stabilityai/stable-zero123/resolve/main/stable_zero123.ckpt" + }, + { + "name": "Stable Video Diffusion Image-to-Video (XT)", + "type": "checkpoints", + "base": "SVD", + "save_path": "checkpoints/SVD", + "description": "Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.\nNOTE: 25 frames @ 576x1024 ", + "reference": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt", + "filename": "svd_xt.safetensors", + "url": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/svd_xt.safetensors" + }, + { + "name": "negative_hand Negative Embedding", + "type": "embeddings", + "base": "SD1.5", + "save_path": "default", + "description": "If you use this embedding with negatives, you can solve the issue of damaging your hands.", + "reference": "https://civitai.com/models/56519/negativehand-negative-embedding", + "filename": "negative_hand-neg.pt", + "url": "https://civitai.com/api/download/models/60938" + }, + { + "name": "bad_prompt Negative Embedding", + "type": "embeddings", + "base": "SD1.5", + "save_path": "default", + "description": "The idea behind this embedding was to somehow train the negative prompt as an embedding, thus unifying the basis of the negative prompt into one word or embedding.", + "reference": "https://civitai.com/models/55700/badprompt-negative-embedding", + "filename": "bad_prompt_version2-neg.pt", + "url": 
"https://civitai.com/api/download/models/60095" + }, + { + "name": "Deep Negative V1.75", + "type": "embeddings", + "base": "SD1.5", + "save_path": "default", + "description": "These embedding learn what disgusting compositions and color patterns are, including faulty human anatomy, offensive color schemes, upside-down spatial structures, and more. Placing it in the negative can go a long way to avoiding these things.", + "reference": "https://civitai.com/models/4629/deep-negative-v1x", + "filename": "ng_deepnegative_v1_75t.pt", + "url": "https://civitai.com/api/download/models/5637" + }, + { + "name": "EasyNegative", + "type": "embeddings", + "base": "SD1.5", + "save_path": "default", + "description": "This embedding should be used in your NEGATIVE prompt. Adjust the strength as desired (seems to scale well without any distortions), the strength required may vary based on positive and negative prompts.", + "reference": "https://civitai.com/models/7808/easynegative", + "filename": "easynegative.safetensors", + "url": "https://civitai.com/api/download/models/9208" + }, + + { + "name": "stabilityai/Stable Cascade: stage_a.safetensors (VAE)", + "type": "VAE", + "base": "Stable Cascade", + "save_path": "vae/Stable-Cascade", + "description": "[73.7MB] Stable Cascade: stage_a", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_a.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_a.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: effnet_encoder.safetensors (VAE)", + "type": "VAE", + "base": "Stable Cascade", + "save_path": "vae/Stable-Cascade", + "description": "[81.5MB] Stable Cascade: effnet_encoder.\nVAE encoder for stage_c latent.", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "effnet_encoder.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/effnet_encoder.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_b.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[6.25GB] Stable Cascade: stage_b", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_b.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_b_bf16.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[3.13GB] Stable Cascade: stage_b/bf16", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_b_bf16.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_bf16.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_b_lite.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[2.8GB] Stable Cascade: stage_b/lite", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_b_lite.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_lite.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_b_lite.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[1.4GB] Stable Cascade: stage_b/bf16,lite", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": 
"stage_b_lite_bf16.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_lite_bf16.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_c.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[14.4GB] Stable Cascade: stage_c", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_c.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_c_bf16.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[7.18GB] Stable Cascade: stage_c/bf16", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_c_bf16.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_bf16.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_c_lite.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[4.12GB] Stable Cascade: stage_c/lite", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_c_lite.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_lite.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_c_lite.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[2.06GB] Stable Cascade: stage_c/bf16,lite", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_c_lite_bf16.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_lite_bf16.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: text_encoder (CLIP)", + "type": "clip", + "base": "Stable Cascade", + "save_path": "clip/Stable-Cascade", + "description": "[1.39GB] Stable Cascade: text_encoder", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "model.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/text_encoder/model.safetensors" + }, + + { + "name": "SDXL-Turbo 1.0 (fp16)", + "type": "checkpoints", + "base": "SDXL", + "save_path": "checkpoints/SDXL-TURBO", + "description": "[6.9GB] SDXL-Turbo 1.0 fp16", + "reference": "https://huggingface.co/stabilityai/sdxl-turbo", + "filename": "sd_xl_turbo_1.0_fp16.safetensors", + "url": "https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors" + }, + { + "name": "SDXL-Turbo 1.0", + "type": "checkpoints", + "base": "SDXL", + "save_path": "checkpoints/SDXL-TURBO", + "description": "[13.9GB] SDXL-Turbo 1.0", + "reference": "https://huggingface.co/stabilityai/sdxl-turbo", + "filename": "sd_xl_turbo_1.0.safetensors", + "url": "https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0.safetensors" + }, + { + "name": "sd_xl_base_1.0_0.9vae.safetensors", + "type": "checkpoints", + "base": "SDXL", + "save_path": "default", + "description": "Stable Diffusion XL base model (VAE 0.9)", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0", + "filename": "sd_xl_base_1.0_0.9vae.safetensors", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors" + }, + { + "name": "sd_xl_base_1.0.safetensors", + "type": 
"checkpoints", + "base": "SDXL", + "save_path": "default", + "description": "Stable Diffusion XL base model", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0", + "filename": "sd_xl_base_1.0.safetensors", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors" + }, + { + "name": "sd_xl_refiner_1.0_0.9vae.safetensors", + "type": "checkpoints", + "base": "SDXL", + "save_path": "default", + "description": "Stable Diffusion XL refiner model (VAE 0.9)", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0", + "filename": "sd_xl_refiner_1.0_0.9vae.safetensors", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0_0.9vae.safetensors" + }, + { + "name": "stable-diffusion-xl-refiner-1.0", + "type": "checkpoints", + "base": "SDXL", + "save_path": "default", + "description": "Stable Diffusion XL refiner model", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0", + "filename": "sd_xl_refiner_1.0.safetensors", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors" + }, + { + "name": "diffusers/stable-diffusion-xl-1.0-inpainting-0.1 (UNET/fp16)", + "type": "unet", + "base": "SDXL", + "save_path": "unet/xl-inpaint-0.1", + "description": "[5.14GB] Stable Diffusion XL inpainting model 0.1. You need UNETLoader instead of CheckpointLoader.", + "reference": "https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1", + "filename": "diffusion_pytorch_model.fp16.safetensors", + "url": "https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1/resolve/main/unet/diffusion_pytorch_model.fp16.safetensors" + }, + { + "name": "diffusers/stable-diffusion-xl-1.0-inpainting-0.1 (UNET)", + "type": "unet", + "base": "SDXL", + "save_path": "unet/xl-inpaint-0.1", + "description": "[10.3GB] Stable Diffusion XL inpainting model 0.1. 
You need UNETLoader instead of CheckpointLoader.", + "reference": "https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1", + "filename": "diffusion_pytorch_model.safetensors", + "url": "https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1/resolve/main/unet/diffusion_pytorch_model.safetensors" + }, + { + "name": "sd_xl_offset_example-lora_1.0.safetensors", + "type": "lora", + "base": "SDXL", + "save_path": "default", + "description": "Stable Diffusion XL offset LoRA", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0", + "filename": "sd_xl_offset_example-lora_1.0.safetensors", + "url": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors" + }, + { + "name": "v1-5-pruned-emaonly.ckpt", + "type": "checkpoints", + "base": "SD1.5", + "save_path": "default", + "description": "Stable Diffusion 1.5 base model", + "reference": "https://huggingface.co/runwayml/stable-diffusion-v1-5", + "filename": "v1-5-pruned-emaonly.ckpt", + "url": "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt" + }, + { + "name": "v2-1_512-ema-pruned.safetensors", + "type": "checkpoints", + "base": "SD2", + "save_path": "default", + "description": "Stable Diffusion 2 base model (512)", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-2-1-base", + "filename": "v2-1_512-ema-pruned.safetensors", + "url": "https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.safetensors" + }, + { + "name": "v2-1_768-ema-pruned.safetensors", + "type": "checkpoints", + "base": "SD2", + "save_path": "default", + "description": "Stable Diffusion 2 base model (768)", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-2-1", + "filename": "v2-1_768-ema-pruned.safetensors", + "url": "https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors" + }, + { + "name": "AbyssOrangeMix2 (hard)", + "type": "checkpoints", + "base": "SD1.5", + "save_path": "default", + "description": "AbyssOrangeMix2 - hard version (anime style)", + "reference": "https://huggingface.co/WarriorMama777/OrangeMixs", + "filename": "AbyssOrangeMix2_hard.safetensors", + "url": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_hard.safetensors" + }, + { + "name": "AbyssOrangeMix3 A1", + "type": "checkpoints", + "base": "SD1.5", + "save_path": "default", + "description": "AbyssOrangeMix3 - A1 (anime style)", + "reference": "https://huggingface.co/WarriorMama777/OrangeMixs", + "filename": "AOM3A1_orangemixs.safetensors", + "url": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1_orangemixs.safetensors" + }, + { + "name": "AbyssOrangeMix3 A3", + "type": "checkpoints", + "base": "SD1.5", + "save_path": "default", + "description": "AbyssOrangeMix - A3 (anime style)", + "reference": "https://huggingface.co/WarriorMama777/OrangeMixs", + "filename": "AOM3A3_orangemixs.safetensors", + "url": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A3_orangemixs.safetensors" + }, + { + "name": "Anything v3 (fp16; pruned)", + "type": "checkpoints", + "base": "SD1.5", + "save_path": "default", + "description": "Anything v3 (anime style)", + "reference": "https://huggingface.co/Linaqruf/anything-v3.0", + "filename": "anything-v3-fp16-pruned.safetensors", + "url": 
"https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/anything-v3-fp16-pruned.safetensors" + }, + { + "name": "Waifu Diffusion 1.5 Beta3 (fp16)", + "type": "checkpoints", + "base": "SD2.1", + "save_path": "default", + "description": "Waifu Diffusion 1.5 Beta3", + "reference": "https://huggingface.co/waifu-diffusion/wd-1-5-beta3", + "filename": "wd-illusion-fp16.safetensors", + "url": "https://huggingface.co/waifu-diffusion/wd-1-5-beta3/resolve/main/wd-illusion-fp16.safetensors" + }, + { + "name": "illuminatiDiffusionV1_v11 unCLIP model", + "type": "unclip", + "base": "SD2.1", + "save_path": "default", + "description": "Mix model (SD2.1 unCLIP + illuminatiDiffusionV1_v11)", + "reference": "https://huggingface.co/comfyanonymous/illuminatiDiffusionV1_v11_unCLIP", + "filename": "illuminatiDiffusionV1_v11-unclip-h-fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/illuminatiDiffusionV1_v11_unCLIP/resolve/main/illuminatiDiffusionV1_v11-unclip-h-fp16.safetensors" + }, + { + "name": "Waifu Diffusion 1.5 unCLIP model", + "type": "unclip", + "base": "SD2.1", + "save_path": "default", + "description": "Mix model (SD2.1 unCLIP + Waifu Diffusion 1.5)", + "reference": "https://huggingface.co/comfyanonymous/wd-1.5-beta2_unCLIP", + "filename": "wd-1-5-beta2-aesthetic-unclip-h-fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/wd-1.5-beta2_unCLIP/resolve/main/wd-1-5-beta2-aesthetic-unclip-h-fp16.safetensors" + }, + { + "name": "sdxl_vae.safetensors", + "type": "VAE", + "base": "SDXL VAE", + "save_path": "default", + "description": "SDXL-VAE", + "reference": "https://huggingface.co/stabilityai/sdxl-vae", + "filename": "sdxl_vae.safetensors", + "url": "https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors" + }, + { + "name": "vae-ft-mse-840000-ema-pruned", + "type": "VAE", + "base": "SD1.5 VAE", + "save_path": "default", + "description": "vae-ft-mse-840000-ema-pruned", + "reference": "https://huggingface.co/stabilityai/sd-vae-ft-mse-original", + "filename": "vae-ft-mse-840000-ema-pruned.safetensors", + "url": "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors" + }, + { + "name": "orangemix.vae", + "type": "VAE", + "base": "SD1.5 VAE", + "save_path": "default", + "description": "orangemix vae model", + "reference": "https://huggingface.co/WarriorMama777/OrangeMixs", + "filename": "orangemix.vae.pt", + "url": "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt" + }, + { + "name": "kl-f8-anime2", + "type": "VAE", + "base": "SD2.1 VAE", + "save_path": "default", + "description": "kl-f8-anime2 vae model", + "reference": "https://huggingface.co/hakurei/waifu-diffusion-v1-4", + "filename": "kl-f8-anime2.ckpt", + "url": "https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt" + }, + { + "name": "OpenAI Consistency Decoder", + "type": "VAE", + "base": "SD1.5 VAE", + "save_path": "vae/openai_consistency_decoder", + "description": "[2.3GB] OpenAI Consistency Decoder. 
Improved decoding for stable diffusion vaes.", + "reference": "https://github.com/openai/consistencydecoder", + "filename": "decoder.pt", + "url": "https://openaipublic.azureedge.net/diff-vae/c9cebd3132dd9c42936d803e33424145a748843c8f716c0814838bdc8a2fe7cb/decoder.pt" + }, + { + "name": "LCM LoRA SD1.5", + "type": "lora", + "base": "SD1.5", + "save_path": "loras/lcm/SD1.5", + "description": "Latent Consistency LoRA for SD1.5", + "reference": "https://huggingface.co/latent-consistency/lcm-lora-sdv1-5", + "filename": "pytorch_lora_weights.safetensors", + "url": "https://huggingface.co/latent-consistency/lcm-lora-sdv1-5/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "name": "LCM LoRA SSD-1B", + "type": "lora", + "base": "SSD-1B", + "save_path": "loras/lcm/SSD-1B", + "description": "Latent Consistency LoRA for SSD-1B", + "reference": "https://huggingface.co/latent-consistency/lcm-lora-ssd-1b", + "filename": "pytorch_lora_weights.safetensors", + "url": "https://huggingface.co/latent-consistency/lcm-lora-ssd-1b/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "name": "LCM LoRA SDXL", + "type": "lora", + "base": "SSD-1B", + "save_path": "loras/lcm/SDXL", + "description": "Latent Consistency LoRA for SDXL", + "reference": "https://huggingface.co/latent-consistency/lcm-lora-sdxl", + "filename": "pytorch_lora_weights.safetensors", + "url": "https://huggingface.co/latent-consistency/lcm-lora-sdxl/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "name": "Segmind-Vega", + "type": "checkpoints", + "base": "segmind-vega", + "save_path": "checkpoints/segmind-vega", + "description": "The Segmind-Vega Model is a distilled version of the Stable Diffusion XL (SDXL), offering a remarkable 70% reduction in size and an impressive 100% speedup while retaining high-quality text-to-image generation capabilities.", + "reference": "https://huggingface.co/segmind/Segmind-Vega", + "filename": "segmind-vega.safetensors", + "url": "https://huggingface.co/segmind/Segmind-Vega/resolve/main/segmind-vega.safetensors" + }, + { + "name": "Segmind-VegaRT - Latent Consistency Model (LCM) LoRA of Segmind-Vega", + "type": "lora", + "base": "segmind-vega", + "save_path": "loras/segmind-vega", + "description": "Segmind-VegaRT a distilled consistency adapter for Segmind-Vega that allows to reduce the number of inference steps to only between 2 - 8 steps.", + "reference": "https://huggingface.co/segmind/Segmind-VegaRT", + "filename": "pytorch_lora_weights.safetensors", + "url": "https://huggingface.co/segmind/Segmind-VegaRT/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "name": "Theovercomer8's Contrast Fix (SD2.1)", + "type": "lora", + "base": "SD2.1", + "save_path": "default", + "description": "LORA: Theovercomer8's Contrast Fix (SD2.1)", + "reference": "https://civitai.com/models/8765/theovercomer8s-contrast-fix-sd15sd21-768", + "filename": "theovercomer8sContrastFix_sd21768.safetensors", + "url": "https://civitai.com/api/download/models/10350" + }, + { + "name": "Theovercomer8's Contrast Fix (SD1.5)", + "type": "lora", + "base": "SD1.5", + "save_path": "default", + "description": "LORA: Theovercomer8's Contrast Fix (SD1.5)", + "reference": "https://civitai.com/models/8765/theovercomer8s-contrast-fix-sd15sd21-768", + "filename": "theovercomer8sContrastFix_sd15.safetensors", + "url": "https://civitai.com/api/download/models/10638" + }, + { + "name": "T2I-Adapter (depth)", + "type": "T2I-Adapter", + "base": "SD1.5", + "save_path": "default", + "description": "ControlNet T2I-Adapter for depth", 
+ "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "filename": "t2iadapter_depth_sd14v1.pth", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_depth_sd14v1.pth" + }, + { + "name": "T2I-Adapter (seg)", + "type": "T2I-Adapter", + "base": "SD1.5", + "save_path": "default", + "description": "ControlNet T2I-Adapter for seg", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "filename": "t2iadapter_seg_sd14v1.pth", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_seg_sd14v1.pth" + }, + { + "name": "T2I-Adapter (sketch)", + "type": "T2I-Adapter", + "base": "SD1.5", + "save_path": "default", + "description": "ControlNet T2I-Adapter for sketch", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "filename": "t2iadapter_sketch_sd14v1.pth", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_sketch_sd14v1.pth" + }, + { + "name": "T2I-Adapter (keypose)", + "type": "T2I-Adapter", + "base": "SD1.5", + "save_path": "default", + "description": "ControlNet T2I-Adapter for keypose", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "filename": "t2iadapter_keypose_sd14v1.pth", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_keypose_sd14v1.pth" + }, + { + "name": "T2I-Adapter (openpose)", + "type": "T2I-Adapter", + "base": "SD1.5", + "save_path": "default", + "description": "ControlNet T2I-Adapter for openpose", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "filename": "t2iadapter_openpose_sd14v1.pth", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_openpose_sd14v1.pth" + }, + { + "name": "T2I-Adapter (color)", + "type": "T2I-Adapter", + "base": "SD1.5", + "save_path": "default", + "description": "ControlNet T2I-Adapter for color", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "filename": "t2iadapter_color_sd14v1.pth", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_color_sd14v1.pth" + }, + { + "name": "T2I-Adapter (canny)", + "type": "T2I-Adapter", + "base": "SD1.5", + "save_path": "default", + "description": "ControlNet T2I-Adapter for canny", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "filename": "t2iadapter_canny_sd14v1.pth", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_canny_sd14v1.pth" + }, + { + "name": "T2I-Style model", + "type": "T2I-Style", + "base": "SD1.5", + "save_path": "default", + "description": "ControlNet T2I-Adapter style model. 
You also need to download the CLIPVision model.", + "reference": "https://huggingface.co/TencentARC/T2I-Adapter", + "filename": "t2iadapter_style_sd14v1.pth", + "url": "https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_style_sd14v1.pth" + }, + { + "name": "CiaraRowles/TemporalNet2", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "TemporalNet is a ControlNet model designed to enhance the temporal consistency of generated outputs", + "reference": "https://huggingface.co/CiaraRowles/TemporalNet2", + "filename": "temporalnetversion2.safetensors", + "url": "https://huggingface.co/CiaraRowles/TemporalNet2/resolve/main/temporalnetversion2.safetensors" + }, + { + "name": "CiaraRowles/TemporalNet1XL (1.0)", + "type": "controlnet", + "base": "SDXL", + "save_path": "controlnet/TemporalNet1XL", + "description": "TemporalNet1XL is a re-train of the TemporalNet1 ControlNet on Stable Diffusion XL.", + "reference": "https://huggingface.co/CiaraRowles/controlnet-temporalnet-sdxl-1.0", + "filename": "diffusion_pytorch_model.safetensors", + "url": "https://huggingface.co/CiaraRowles/controlnet-temporalnet-sdxl-1.0/resolve/main/diffusion_pytorch_model.safetensors" + }, + { + "name": "CLIPVision model (stabilityai/clip_vision_g)", + "type": "clip_vision", + "base": "vit-g", + "save_path": "clip_vision", + "description": "[3.69GB] clip_g vision model", + "reference": "https://huggingface.co/stabilityai/control-lora", + "filename": "clip_vision_g.safetensors", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/revision/clip_vision_g.safetensors" + }, + { + "name": "CLIPVision model (openai/clip-vit-large)", + "type": "clip_vision", + "base": "ViT-L", + "save_path": "clip_vision", + "description": "[1.7GB] CLIPVision model (needed for styles model)", + "reference": "https://huggingface.co/openai/clip-vit-large-patch14", + "filename": "clip-vit-large-patch14.bin", + "url": "https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/model.safetensors" + }, + { + "name": "CLIPVision model (IP-Adapter) CLIP-ViT-H-14-laion2B-s32B-b79K", + "type": "clip_vision", + "base": "ViT-H", + "save_path": "clip_vision", + "description": "[2.5GB] CLIPVision model (needed for IP-Adapter)", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors" + }, + { + "name": "CLIPVision model (IP-Adapter) CLIP-ViT-bigG-14-laion2B-39B-b160k", + "type": "clip_vision", + "base": "ViT-G", + "save_path": "clip_vision", + "description": "[3.69GB] CLIPVision model (needed for IP-Adapter)", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors" + }, + { + "name": "stabilityai/control-lora-canny-rank128.safetensors", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "Control-LoRA: canny rank128", + "reference": "https://huggingface.co/stabilityai/control-lora", + "filename": "control-lora-canny-rank128.safetensors", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-canny-rank128.safetensors" + }, + { + "name": "stabilityai/control-lora-depth-rank128.safetensors", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", +
"description": "Control-LoRA: depth rank128", + "reference": "https://huggingface.co/stabilityai/control-lora", + "filename": "control-lora-depth-rank128.safetensors", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-depth-rank128.safetensors" + }, + { + "name": "stabilityai/control-lora-recolor-rank128.safetensors", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "Control-LoRA: recolor rank128", + "reference": "https://huggingface.co/stabilityai/control-lora", + "filename": "control-lora-recolor-rank128.safetensors", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-recolor-rank128.safetensors" + }, + { + "name": "stabilityai/control-lora-sketch-rank128-metadata.safetensors", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "Control-LoRA: sketch rank128 metadata", + "reference": "https://huggingface.co/stabilityai/control-lora", + "filename": "control-lora-sketch-rank128-metadata.safetensors", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank128/control-lora-sketch-rank128-metadata.safetensors" + }, + { + "name": "stabilityai/control-lora-canny-rank256.safetensors", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "Control-LoRA: canny rank256", + "reference": "https://huggingface.co/stabilityai/control-lora", + "filename": "control-lora-canny-rank256.safetensors", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-canny-rank256.safetensors" + }, + { + "name": "stabilityai/control-lora-depth-rank256.safetensors", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "Control-LoRA: depth rank256", + "reference": "https://huggingface.co/stabilityai/control-lora", + "filename": "control-lora-depth-rank256.safetensors", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-depth-rank256.safetensors" + }, + { + "name": "stabilityai/control-lora-recolor-rank256.safetensors", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "Control-LoRA: recolor rank256", + "reference": "https://huggingface.co/stabilityai/control-lora", + "filename": "control-lora-recolor-rank256.safetensors", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-recolor-rank256.safetensors" + }, + { + "name": "stabilityai/control-lora-sketch-rank256.safetensors", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "Control-LoRA: sketch rank256", + "reference": "https://huggingface.co/stabilityai/control-lora", + "filename": "control-lora-sketch-rank256.safetensors", + "url": "https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-sketch-rank256.safetensors" + }, + + { + "name": "kohya-ss/ControlNet-LLLite: SDXL Canny Anime", + "type": "controlnet", + "base": "SDXL", + "save_path": "custom_nodes/ControlNet-LLLite-ComfyUI/models", + "description": "[46.2MB] An extremely compactly designed controlnet model (a.k.a. ControlNet-LLLite). 
Note: The model structure is highly experimental and may be subject to change in the future.", + "reference": "https://huggingface.co/kohya-ss/controlnet-lllite", + "filename": "controllllite_v01032064e_sdxl_canny_anime.safetensors", + "url": "https://huggingface.co/kohya-ss/controlnet-lllite/resolve/main/controllllite_v01032064e_sdxl_canny_anime.safetensors" + }, + + { + "name": "SDXL-controlnet: OpenPose (v2)", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "ControlNet openpose model for SDXL", + "reference": "https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0", + "filename": "OpenPoseXL2.safetensors", + "url": "https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/resolve/main/OpenPoseXL2.safetensors" + }, + { + "name": "controlnet-SargeZT/controlnet-sd-xl-1.0-softedge-dexined", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "ControlNet softedge model for SDXL", + "reference": "https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-softedge-dexined", + "filename": "controlnet-sd-xl-1.0-softedge-dexined.safetensors", + "url": "https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-softedge-dexined/resolve/main/controlnet-sd-xl-1.0-softedge-dexined.safetensors" + }, + { + "name": "controlnet-SargeZT/controlnet-sd-xl-1.0-depth-16bit-zoe", + "type": "controlnet", + "base": "SDXL", + "save_path": "default", + "description": "ControlNet depth-zoe model for SDXL", + "reference": "https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-depth-16bit-zoe", + "filename": "depth-zoe-xl-v1.0-controlnet.safetensors", + "url": "https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-depth-16bit-zoe/resolve/main/depth-zoe-xl-v1.0-controlnet.safetensors" + }, + + { + "name": "ControlNet-v1-1 (ip2p; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (ip2p)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11e_sd15_ip2p_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (shuffle; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (shuffle)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11e_sd15_shuffle_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (canny; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (canny)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11p_sd15_canny_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_canny_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (depth; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (depth)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + 
"filename": "control_v11f1p_sd15_depth_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (inpaint; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (inpaint)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11p_sd15_inpaint_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (lineart; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (lineart)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11p_sd15_lineart_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_lineart_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (mlsd; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (mlsd)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11p_sd15_mlsd_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (normalbae; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (normalbae)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11p_sd15_normalbae_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (openpose; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (openpose)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11p_sd15_openpose_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_openpose_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (scribble; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (scribble)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11p_sd15_scribble_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_scribble_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (seg; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (seg)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": 
"control_v11p_sd15_seg_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_seg_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (softedge; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (softedge)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11p_sd15_softedge_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_softedge_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (anime; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (anime)", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11p_sd15s2_lineart_anime_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (tile; fp16; v11u)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (tile) / v11u", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11u_sd15_tile_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11u_sd15_tile_fp16.safetensors" + }, + { + "name": "ControlNet-v1-1 (tile; fp16; v11f1e)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (tile) / v11f1e\nYou need to this model for Tiled Resample", + "reference": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors", + "filename": "control_v11f1e_sd15_tile_fp16.safetensors", + "url": "https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors" + }, + { + "name": "ControlNet-HandRefiner-pruned (inpaint-depth-hand; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "This inpaint-depth controlnet model is specialized for the hand refiner.", + "reference": "https://huggingface.co/hr16/ControlNet-HandRefiner-pruned", + "filename": "control_sd15_inpaint_depth_hand_fp16.safetensors", + "url": "https://huggingface.co/hr16/ControlNet-HandRefiner-pruned/resolve/main/control_sd15_inpaint_depth_hand_fp16.safetensors" + }, + { + "name": "control_boxdepth_LooseControlfp16 (fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Loose ControlNet model", + "reference": "https://huggingface.co/ioclab/LooseControl_WebUICombine", + "filename": "control_boxdepth_LooseControlfp16.safetensors", + "url": "https://huggingface.co/ioclab/LooseControl_WebUICombine/resolve/main/control_boxdepth_LooseControlfp16.safetensors" + }, + { + "name": "GLIGEN textbox (fp16; pruned)", + "type": "gligen", + "base": "SD1.5", + "save_path": "default", + "description": "GLIGEN textbox model", + "reference": "https://huggingface.co/comfyanonymous/GLIGEN_pruned_safetensors", + "filename": "gligen_sd14_textbox_pruned_fp16.safetensors", + "url": 
"https://huggingface.co/comfyanonymous/GLIGEN_pruned_safetensors/resolve/main/gligen_sd14_textbox_pruned_fp16.safetensors" + }, + { + "name": "ViT-H SAM model", + "type": "sam", + "base": "SAM", + "save_path": "sams", + "description": "Segmenty Anything SAM model (ViT-H)", + "reference": "https://github.com/facebookresearch/segment-anything#model-checkpoints", + "filename": "sam_vit_h_4b8939.pth", + "url": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth" + }, + { + "name": "ViT-L SAM model", + "type": "sam", + "base": "SAM", + "save_path": "sams", + "description": "Segmenty Anything SAM model (ViT-L)", + "reference": "https://github.com/facebookresearch/segment-anything#model-checkpoints", + "filename": "sam_vit_l_0b3195.pth", + "url": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth" + }, + { + "name": "ViT-B SAM model", + "type": "sam", + "base": "SAM", + "save_path": "sams", + "description": "Segmenty Anything SAM model (ViT-B)", + "reference": "https://github.com/facebookresearch/segment-anything#model-checkpoints", + "filename": "sam_vit_b_01ec64.pth", + "url": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth" + }, + { + "name": "seecoder v1.0", + "type": "seecoder", + "base": "SEECODER", + "save_path": "seecoders", + "description": "SeeCoder model", + "reference": "https://huggingface.co/shi-labs/prompt-free-diffusion/tree/main/pretrained/pfd/seecoder", + "filename": "seecoder-v1-0.safetensors", + "url": "https://huggingface.co/shi-labs/prompt-free-diffusion/resolve/main/pretrained/pfd/seecoder/seecoder-v1-0.safetensors" + }, + { + "name": "seecoder pa v1.0", + "type": "seecoder", + "base": "SEECODER", + "save_path": "seecoders", + "description": "SeeCoder model", + "reference": "https://huggingface.co/shi-labs/prompt-free-diffusion/tree/main/pretrained/pfd/seecoder", + "filename": "seecoder-pa-v1-0.safetensors", + "url": "https://huggingface.co/shi-labs/prompt-free-diffusion/resolve/main/pretrained/pfd/seecoder/seecoder-pa-v1-0.safetensors" + }, + { + "name": "seecoder anime v1.0", + "type": "seecoder", + "base": "SEECODER", + "save_path": "seecoders", + "description": "SeeCoder model", + "reference": "https://huggingface.co/shi-labs/prompt-free-diffusion/tree/main/pretrained/pfd/seecoder", + "filename": "seecoder-anime-v1-0.safetensors", + "url": "https://huggingface.co/shi-labs/prompt-free-diffusion/resolve/main/pretrained/pfd/seecoder/seecoder-anime-v1-0.safetensors" + }, + { + "name": "face_yolov8m (bbox)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/bbox", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "face_yolov8m.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov8m.pt" + }, + { + "name": "face_yolov8n (bbox)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/bbox", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "face_yolov8n.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov8n.pt" + }, + { + "name": "face_yolov8n_v2 (bbox)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/bbox", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": 
"https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "face_yolov8n_v2.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov8n_v2.pt" + }, + { + "name": "face_yolov8s (bbox)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/bbox", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "face_yolov8s.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/face_yolov8s.pt" + }, + { + "name": "hand_yolov8n (bbox)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/bbox", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "hand_yolov8n.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/hand_yolov8n.pt" + }, + { + "name": "hand_yolov8s (bbox)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/bbox", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "hand_yolov8s.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/hand_yolov8s.pt" + }, + { + "name": "person_yolov8m (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "person_yolov8m-seg.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/person_yolov8m-seg.pt" + }, + { + "name": "person_yolov8n (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "person_yolov8n-seg.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/person_yolov8n-seg.pt" + }, + { + "name": "person_yolov8s (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "person_yolov8s-seg.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/person_yolov8s-seg.pt" + }, + { + "name": "deepfashion2_yolov8s (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://huggingface.co/Bingsu/adetailer/tree/main", + "filename": "deepfashion2_yolov8s-seg.pt", + "url": "https://huggingface.co/Bingsu/adetailer/resolve/main/deepfashion2_yolov8s-seg.pt" + }, + + { + "name": "face_yolov8m-seg_60.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "face_yolov8m-seg_60.pt", + "url": 
"https://github.com/hben35096/assets/releases/download/yolo8/face_yolov8m-seg_60.pt" + }, + { + "name": "face_yolov8n-seg2_60.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "face_yolov8n-seg2_60.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/face_yolov8n-seg2_60.pt" + }, + { + "name": "hair_yolov8n-seg_60.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "hair_yolov8n-seg_60.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/hair_yolov8n-seg_60.pt" + }, + { + "name": "skin_yolov8m-seg_400.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "skin_yolov8m-seg_400.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/skin_yolov8m-seg_400.pt" + }, + { + "name": "skin_yolov8n-seg_400.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "skin_yolov8n-seg_400.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/skin_yolov8n-seg_400.pt" + }, + { + "name": "skin_yolov8n-seg_800.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "skin_yolov8n-seg_800.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/skin_yolov8n-seg_800.pt" + }, + + { + "name": "animatediff/mmd_sd_v14.ckpt (comfyui-animatediff) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "AnimateDiff", + "description": "Pressing 'install' directly downloads the model from the ArtVentureX/AnimateDiff extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "mm_sd_v14.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v14.ckpt" + }, + { + "name": "animatediff/mm_sd_v15.ckpt (comfyui-animatediff) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "AnimateDiff", + "description": "Pressing 'install' directly downloads the model from the ArtVentureX/AnimateDiff extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "mm_sd_v15.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15.ckpt" + }, + + { + "name": "animatediff/mmd_sd_v14.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + 
"reference": "https://huggingface.co/guoyww/animatediff", + "filename": "mm_sd_v14.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v14.ckpt" + }, + { + "name": "animatediff/mm_sd_v15.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "mm_sd_v15.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15.ckpt" + }, + { + "name": "animatediff/mm_sd_v15_v2.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "mm_sd_v15_v2.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt" + }, + { + "name": "animatediff/v3_sd15_mm.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v3_sd15_mm.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_mm.ckpt" + }, + + { + "name": "animatediff/mm_sdxl_v10_beta.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SDXL", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "mm_sdxl_v10_beta.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sdxl_v10_beta.ckpt" + }, + { + "name": "AD_Stabilized_Motion/mm-Stabilized_high.pth (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/manshoety/AD_Stabilized_Motion", + "filename": "mm-Stabilized_high.pth", + "url": "https://huggingface.co/manshoety/AD_Stabilized_Motion/resolve/main/mm-Stabilized_high.pth" + }, + { + "name": "AD_Stabilized_Motion/mm-Stabilized_mid.pth (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/manshoety/AD_Stabilized_Motion", + "filename": "mm-Stabilized_mid.pth", + "url": "https://huggingface.co/manshoety/AD_Stabilized_Motion/resolve/main/mm-Stabilized_mid.pth" + }, + { + "name": "CiaraRowles/temporaldiff-v1-animatediff.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": 
"https://huggingface.co/CiaraRowles/TemporalDiff", + "filename": "temporaldiff-v1-animatediff.ckpt", + "url": "https://huggingface.co/CiaraRowles/TemporalDiff/resolve/main/temporaldiff-v1-animatediff.ckpt" + }, + + { + "name": "animatediff/v2_lora_PanLeft.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_PanLeft.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_PanLeft.ckpt" + }, + { + "name": "animatediff/v2_lora_PanRight.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_PanRight.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_PanRight.ckpt" + }, + { + "name": "animatediff/v2_lora_RollingAnticlockwise.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_RollingAnticlockwise.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_RollingAnticlockwise.ckpt" + }, + { + "name": "animatediff/v2_lora_RollingClockwise.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_RollingClockwise.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_RollingClockwise.ckpt" + }, + { + "name": "animatediff/v2_lora_TiltDown.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_TiltDown.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_TiltDown.ckpt" + }, + { + "name": "animatediff/v2_lora_TiltUp.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_TiltUp.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_TiltUp.ckpt" + }, + { + "name": "animatediff/v2_lora_ZoomIn.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model 
from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_ZoomIn.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_ZoomIn.ckpt" + }, + { + "name": "animatediff/v2_lora_ZoomOut.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_ZoomOut.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_ZoomOut.ckpt" + }, + { + "name": "LongAnimatediff/lt_long_mm_32_frames.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/Lightricks/LongAnimateDiff", + "filename": "lt_long_mm_32_frames.ckpt", + "url": "https://huggingface.co/Lightricks/LongAnimateDiff/resolve/main/lt_long_mm_32_frames.ckpt" + }, + { + "name": "LongAnimatediff/lt_long_mm_16_64_frames.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/Lightricks/LongAnimateDiff", + "filename": "lt_long_mm_16_64_frames.ckpt", + "url": "https://huggingface.co/Lightricks/LongAnimateDiff/resolve/main/lt_long_mm_16_64_frames.ckpt" + }, + { + "name": "LongAnimatediff/lt_long_mm_16_64_frames_v1.1.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/Lightricks/LongAnimateDiff", + "filename": "lt_long_mm_16_64_frames_v1.1.ckpt", + "url": "https://huggingface.co/Lightricks/LongAnimateDiff/resolve/main/lt_long_mm_16_64_frames_v1.1.ckpt" + }, + + + { + "name": "animatediff/v3_sd15_sparsectrl_rgb.ckpt (ComfyUI-AnimateDiff-Evolved)", + "type": "controlnet", + "base": "SD1.x", + "save_path": "controlnet/SD1.5/animatediff", + "description": "AnimateDiff SparseCtrl RGB ControlNet model", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v3_sd15_sparsectrl_rgb.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_sparsectrl_rgb.ckpt" + }, + { + "name": "animatediff/v3_sd15_sparsectrl_scribble.ckpt", + "type": "controlnet", + "base": "SD1.x", + "save_path": "controlnet/SD1.5/animatediff", + "description": "AnimateDiff SparseCtrl Scribble ControlNet model", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v3_sd15_sparsectrl_scribble.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_sparsectrl_scribble.ckpt" + }, + { + "name": "animatediff/v3_sd15_adapter.ckpt", + "type": "lora", + "base": "SD1.x", + "save_path": "loras/SD1.5/animatediff", + "description": "AnimateDiff Adapter LoRA (SD1.5)", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": 
"v3_sd15_adapter.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_adapter.ckpt" + }, + + { + "name": "TencentARC/motionctrl.pth", + "type": "checkpoints", + "base": "MotionCtrl", + "save_path": "checkpoints/motionctrl", + "description": "To use the ComfyUI-MotionCtrl extension, downloading this model is required.", + "reference": "https://huggingface.co/TencentARC/MotionCtrl", + "filename": "motionctrl.pth", + "url": "https://huggingface.co/TencentARC/MotionCtrl/resolve/main/motionctrl.pth" + }, + + { + "name": "ip-adapter_sd15.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sd15.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15.safetensors" + }, + { + "name": "ip-adapter_sd15_light.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sd15_light.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15_light.safetensors" + }, + { + "name": "ip-adapter_sd15_vit-G.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sd15_vit-G.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15_vit-G.safetensors" + }, + { + "name": "ip-adapter-plus_sd15.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-plus_sd15.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus_sd15.safetensors" + }, + { + "name": "ip-adapter-plus-face_sd15.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-plus-face_sd15.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus-face_sd15.safetensors" + }, + { + "name": "ip-adapter-full-face_sd15.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-full-face_sd15.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-full-face_sd15.safetensors" + }, + { + "name": "ip-adapter-faceid_sd15.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Model (SD1.5) 
[ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid_sd15.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sd15.bin" + }, + { + "name": "ip-adapter-faceid-plus_sd15.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Plus Model (SD1.5) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plus_sd15.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plus_sd15.bin" + }, + { + "name": "ip-adapter-faceid-portrait_sd15.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Portrait Model (SD1.5) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-portrait_sd15.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-portrait_sd15.bin" + }, + { + "name": "ip-adapter-faceid_sdxl.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Model (SDXL) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid_sdxl.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sdxl.bin" + }, + { + "name": "ip-adapter-faceid-plusv2_sdxl.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Plus Model (SDXL) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plusv2_sdxl.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sdxl.bin" + }, + { + "name": "ip-adapter-faceid_sd15_lora.safetensors", + "type": "lora", + "base": "SD1.5", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID LoRA Model (SD1.5) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid_sd15_lora.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sd15_lora.safetensors" + }, + { + "name": "ip-adapter-faceid-plus_sd15_lora.safetensors", + "type": "lora", + "base": "SD1.5", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID Plus LoRA Model (SD1.5) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plus_sd15_lora.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plus_sd15_lora.safetensors" + }, + { + "name": "ip-adapter-faceid-plusv2_sd15.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID-Plus V2 Model (SD1.5) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plusv2_sd15.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sd15.bin" + }, + { + "name": "ip-adapter-faceid-plusv2_sd15_lora.safetensors", + "type": "lora", + "base": "SD1.5", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID-Plus V2 LoRA Model (SD1.5) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plusv2_sd15_lora.safetensors", + "url": 
"https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sd15_lora.safetensors" + }, + { + "name": "ip-adapter-faceid_sdxl_lora.safetensors", + "type": "lora", + "base": "SDXL", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID LoRA Model (SDXL) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid_sdxl_lora.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sdxl_lora.safetensors" + }, + { + "name": "ip-adapter-faceid-plusv2_sdxl_lora.safetensors", + "type": "lora", + "base": "SDXL", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID-Plus V2 LoRA Model (SDXL) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plusv2_sdxl_lora.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sdxl_lora.safetensors" + }, + { + "name": "ip-adapter_sdxl.safetensors", + "type": "IP-Adapter", + "base": "SDXL", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sdxl.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl.safetensors" + }, + { + "name": "ip-adapter_sdxl_vit-h.safetensors", + "type": "IP-Adapter", + "base": "SDXL", + "save_path": "ipadapter", + "description": "This model requires the use of the SD1.5 encoder despite being for SDXL checkpoints [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sdxl_vit-h.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl_vit-h.safetensors" + }, + { + "name": "ip-adapter-plus_sdxl_vit-h.safetensors", + "type": "IP-Adapter", + "base": "SDXL", + "save_path": "ipadapter", + "description": "This model requires the use of the SD1.5 encoder despite being for SDXL checkpoints [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-plus_sdxl_vit-h.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter-plus_sdxl_vit-h.safetensors" + }, + { + "name": "ip-adapter-plus-face_sdxl_vit-h.safetensors", + "type": "IP-Adapter", + "base": "SDXL", + "save_path": "ipadapter", + "description": "This model requires the use of the SD1.5 encoder despite being for SDXL checkpoints [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-plus-face_sdxl_vit-h.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter-plus-face_sdxl_vit-h.safetensors" + }, + + { + "name": "pfg-novel-n10.pt", + "type": "PFG", + "base": "SD1.5", + "save_path": "custom_nodes/pfg-ComfyUI/models", + "description": "Pressing 'install' directly downloads the model from the pfg-ComfyUI/models extension node. (Note: Requires ComfyUI-Manager V0.24 or above)", + "reference": "https://huggingface.co/furusu/PFG", + "filename": "pfg-novel-n10.pt", + "url": "https://huggingface.co/furusu/PFG/resolve/main/pfg-novel-n10.pt" + }, + { + "name": "pfg-wd14-n10.pt", + "type": "PFG", + "base": "SD1.5", + "save_path": "custom_nodes/pfg-ComfyUI/models", + "description": "Pressing 'install' directly downloads the model from the pfg-ComfyUI/models extension node. 
(Note: Requires ComfyUI-Manager V0.24 or above)", + "reference": "https://huggingface.co/furusu/PFG", + "filename": "pfg-wd14-n10.pt", + "url": "https://huggingface.co/furusu/PFG/resolve/main/pfg-wd14-n10.pt" + }, + { + "name": "pfg-wd15beta2-n10.pt", + "type": "PFG", + "base": "SD1.5", + "save_path": "custom_nodes/pfg-ComfyUI/models", + "description": "Pressing 'install' directly downloads the model into the 'custom_nodes/pfg-ComfyUI/models' directory of the extension. (Note: Requires ComfyUI-Manager V0.24 or above)", + "reference": "https://huggingface.co/furusu/PFG", + "filename": "pfg-wd15beta2-n10.pt", + "url": "https://huggingface.co/furusu/PFG/resolve/main/pfg-wd15beta2-n10.pt" + }, + { + "name": "GFPGANv1.4.pth", + "type": "GFPGAN", + "base": "GFPGAN", + "save_path": "facerestore_models", + "description": "Face Restoration Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "reference": "https://github.com/TencentARC/GFPGAN/releases", + "filename": "GFPGANv1.4.pth", + "url": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth" + }, + { + "name": "codeformer.pth", + "type": "CodeFormer", + "base": "CodeFormer", + "save_path": "facerestore_models", + "description": "Face Restoration Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "reference": "https://github.com/sczhou/CodeFormer/releases", + "filename": "codeformer.pth", + "url": "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth" + }, + { + "name": "detection_Resnet50_Final.pth", + "type": "facexlib", + "base": "facexlib", + "save_path": "facerestore_models", + "description": "Face Detection Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "reference": "https://github.com/xinntao/facexlib", + "filename": "detection_Resnet50_Final.pth", + "url": "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" + }, + { + "name": "detection_mobilenet0.25_Final.pth", + "type": "facexlib", + "base": "facexlib", + "save_path": "facerestore_models", + "description": "Face Detection Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "reference": "https://github.com/xinntao/facexlib", + "filename": "detection_mobilenet0.25_Final.pth", + "url": "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_mobilenet0.25_Final.pth" + }, + { + "name": "yolov5l-face.pth", + "type": "facexlib", + "base": "facexlib", + "save_path": "facedetection", + "description": "Face Detection Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "reference": "https://github.com/xinntao/facexlib", + "filename": "yolov5l-face.pth", + "url": "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5l-face.pth" + }, + { + "name": "yolov5n-face.pth", + "type": "facexlib", + "base": "facexlib", + "save_path": "facedetection", + "description": "Face Detection Models. Download the model required for using the 'Facerestore CF (Code Former)' custom node.", + "reference": "https://github.com/xinntao/facexlib", + "filename": "yolov5n-face.pth", + "url": "https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5n-face.pth" + }, + { + "name": "photomaker-v1.bin", + "type": "photomaker", + "base": "SDXL", + "save_path": "photomaker", + "description": "PhotoMaker model. This model is compatible with SDXL.", + "reference": "https://huggingface.co/TencentARC/PhotoMaker", + "filename": "photomaker-v1.bin", + "url": "https://huggingface.co/TencentARC/PhotoMaker/resolve/main/photomaker-v1.bin" + }, + { + "name": "1k3d68.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 1k3d68.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "1k3d68.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/1k3d68.onnx" + }, + { + "name": "2d106det.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 2d106det.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "2d106det.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/2d106det.onnx" + }, + { + "name": "genderage.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 genderage.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "genderage.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/genderage.onnx" + }, + { + "name": "glintr100.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 glintr100.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "glintr100.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/glintr100.onnx" + }, + { + "name": "scrfd_10g_bnkps.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 scrfd_10g_bnkps.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "scrfd_10g_bnkps.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/scrfd_10g_bnkps.onnx" + }, + { + "name": "ip-adapter.bin", + "type": "instantid", + "base": "SDXL", + "save_path": "instantid", + "description": "InstantId main model based on IpAdapter", + "reference": "https://huggingface.co/InstantX/InstantID", + "filename": "ip-adapter.bin", + "url": "https://huggingface.co/InstantX/InstantID/resolve/main/ip-adapter.bin" + }, + { + "name": "diffusion_pytorch_model.safetensors", + "type": "controlnet", + "base": "SDXL", + "save_path": "controlnet/instantid", + "description": "InstantId controlnet model", + "reference": "https://huggingface.co/InstantX/InstantID", + "filename": "diffusion_pytorch_model.safetensors", + "url": "https://huggingface.co/InstantX/InstantID/resolve/main/ControlNetModel/diffusion_pytorch_model.safetensors" + } + ] +} diff --git a/custom_nodes/ComfyUI-Manager/node_db/dev/custom-node-list.json b/custom_nodes/ComfyUI-Manager/node_db/dev/custom-node-list.json new file mode 100644 index 0000000000000000000000000000000000000000..24b8a23a1ae986aa55684d2aa8a0106545123ce8 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/dev/custom-node-list.json @@ -0,0 +1,494 @@ +{ + "custom_nodes": [ + { + "author": "#NOTICE_1.13", + "title": "NOTICE: This channel is not the default channel.", + "reference": "https://github.com/ltdrdata/ComfyUI-Manager", + "files": [], + "install_type": "git-clone", + "description": "If you see this message, your ComfyUI-Manager is outdated.\nDev channel provides only the list of the developing nodes. If you want to find the complete node list, please go to the Default channel."
+ }, + + + { + "author": "shadowcz007", + "title": "comfyui-musicgen", + "reference": "https://github.com/shadowcz007/comfyui-musicgen", + "files": [ + "https://github.com/shadowcz007/comfyui-musicgen" + ], + "install_type": "git-clone", + "description": "Nodes:Musicgen" + }, + { + "author": "Extraltodeus", + "title": "ComfyUI-variableCFGandAntiBurn [WIP]", + "reference": "https://github.com/Extraltodeus/ComfyUI-variableCFGandAntiBurn", + "files": [ + "https://github.com/Extraltodeus/ComfyUI-variableCFGandAntiBurn" + ], + "install_type": "git-clone", + "description": "Nodes:Continuous CFG rescaler (pre CFG), Intermediary latent merge (post CFG), Intensity/Brightness limiter (post CFG), Dynamic renoising (post CFG), Automatic CFG scale (pre/post CFG), CFG multiplier per channel (pre CFG), Self-Attention Guidance delayed activation mod (post CFG)" + }, + { + "author": "shadowcz007", + "title": "comfyui-CLIPSeg", + "reference": "https://github.com/shadowcz007/comfyui-CLIPSeg", + "files": [ + "https://github.com/shadowcz007/comfyui-CLIPSeg" + ], + "install_type": "git-clone", + "description": "Download [a/CLIPSeg](https://huggingface.co/CIDAS/clipseg-rd64-refined/tree/main), then move it to: models/clipseg" + }, + { + "author": "dezi-ai", + "title": "ComfyUI Animate LCM", + "reference": "https://github.com/dezi-ai/ComfyUI-AnimateLCM", + "files": [ + "https://github.com/dezi-ai/ComfyUI-AnimateLCM" + ], + "install_type": "git-clone", + "description": "ComfyUI implementation for [a/AnimateLCM](https://animatelcm.github.io/) [[a/paper](https://arxiv.org/abs/2402.00769)].\n[w/This extension includes a large number of nodes imported from the existing custom nodes, increasing the likelihood of conflicts.]" + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI-BRIA_AI-RMBG", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-BRIA_AI-RMBG", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-BRIA_AI-RMBG" + ], + "install_type": "git-clone", + "description": "Unofficial implementation of [a/BRIA Background Removal v1.4](https://huggingface.co/briaai/RMBG-1.4) (the BRIA RMBG model) for ComfyUI" + }, + { + "author": "stutya", + "title": "ComfyUI-Terminal [UNSAFE]", + "reference": "https://github.com/stutya/ComfyUI-Terminal", + "files": [ + "https://github.com/stutya/ComfyUI-Terminal" + ], + "install_type": "git-clone", + "description": "Run Terminal Commands from ComfyUI.\n[w/This extension poses a risk of executing arbitrary commands through workflow execution. Please be cautious.]" + }, + { + "author": "marcueberall", + "title": "ComfyUI-BuildPath", + "reference": "https://github.com/marcueberall/ComfyUI-BuildPath", + "files": [ + "https://github.com/marcueberall/ComfyUI-BuildPath" + ], + "install_type": "git-clone", + "description": "Nodes: Build Path Adv." + }, + { + "author": "LotzF", + "title": "ComfyUI simple ChatGPT completion [UNSAFE]", + "reference": "https://github.com/LotzF/ComfyUI-Simple-Chat-GPT-completion", + "files": [ + "https://github.com/LotzF/ComfyUI-Simple-Chat-GPT-completion" + ], + "install_type": "git-clone", + "description": "A simple node to request ChatGPT completions. [w/Do not share your workflows including the API key! I'll take no responsibility for your leaked keys.]" + }, + { + "author": "kappa54m", + "title": "ComfyUI_Usability (WIP)", + "reference": "https://github.com/kappa54m/ComfyUI_Usability", + "files": [ + "https://github.com/kappa54m/ComfyUI_Usability" + ], + "install_type": "git-clone", + "description": "Nodes: Load Image Dedup, Load Image By Path."
+ }, + { + "author": "17Retoucher", + "title": "ComfyUI_Fooocus", + "reference": "https://github.com/17Retoucher/ComfyUI_Fooocus", + "files": [ + "https://github.com/17Retoucher/ComfyUI_Fooocus" + ], + "install_type": "git-clone", + "description": "Custom nodes that help reproduce image generation in Fooocus." + }, + { + "author": "nkchocoai", + "title": "ComfyUI-PromptUtilities", + "reference": "https://github.com/nkchocoai/ComfyUI-PromptUtilities", + "files": [ + "https://github.com/nkchocoai/ComfyUI-PromptUtilities" + ], + "install_type": "git-clone", + "description": "Nodes: Format String, Join String List, Load Preset, Load Preset (Advanced), Const String, Const String (multi line). Adds useful nodes related to prompts." + }, + { + "author": "BadCafeCode", + "title": "execution-inversion-demo-comfyui", + "reference": "https://github.com/BadCafeCode/execution-inversion-demo-comfyui", + "files": [ + "https://github.com/BadCafeCode/execution-inversion-demo-comfyui" + ], + "install_type": "git-clone", + "description": "execution-inversion-demo-comfyui" + }, + { + "author": "unanan", + "title": "ComfyUI-clip-interrogator [WIP]", + "reference": "https://github.com/unanan/ComfyUI-clip-interrogator", + "files": [ + "https://github.com/unanan/ComfyUI-clip-interrogator" + ], + "install_type": "git-clone", + "description": "Unofficial ComfyUI extension of clip-interrogator" + }, + { + "author": "prismwastaken", + "title": "prism-tools", + "reference": "https://github.com/prismwastaken/comfyui-tools", + "files": [ + "https://github.com/prismwastaken/comfyui-tools" + ], + "install_type": "git-clone", + "description": "prism-tools" + }, + { + "author": "poisenbery", + "title": "NudeNet-Detector-Provider [WIP]", + "reference": "https://github.com/poisenbery/NudeNet-Detector-Provider", + "files": [ + "https://github.com/poisenbery/NudeNet-Detector-Provider" + ], + "install_type": "git-clone", + "description": "BBOX Detector Provider for NudeNet. Bethesda version of NudeNet V3 detector provider, to work with the ComfyUI Impact Pack." + }, + { + "author": "LarryJane491", + "title": "ComfyUI-ModelUnloader", + "reference": "https://github.com/LarryJane491/ComfyUI-ModelUnloader", + "files": [ + "https://github.com/LarryJane491/ComfyUI-ModelUnloader" + ], + "install_type": "git-clone", + "description": "A simple custom node that unloads all models. Useful for developers or users who want to free some memory." + }, + { + "author": "AIGODLIKE", + "title": "AIGODLIKE/ComfyUI-Model-Manager [WIP]", + "reference": "https://github.com/AIGODLIKE/ComfyUI-Studio", + "files": [ + "https://github.com/AIGODLIKE/ComfyUI-Studio" + ], + "install_type": "git-clone", + "description": "WIP" + }, + { + "author": "MrAdamBlack", + "title": "CheckProgress [WIP]", + "reference": "https://github.com/MrAdamBlack/CheckProgress", + "files": [ + "https://github.com/MrAdamBlack/CheckProgress" + ], + "install_type": "git-clone", + "description": "I was looking for a node to put in place to ensure my prompt etc. were going as expected before the rest of the flow executed. To end the session, I just return the input image as None (see expected error). Recommend using it alongside PreviewImage, then output to the rest of the flow and Save Image."
+ }, + { + "author": "11cafe", + "title": "11cafe/ComfyUI Model Manager [WIP]", + "reference": "https://github.com/11cafe/model-manager-comfyui", + "files": [ + "https://github.com/11cafe/model-manager-comfyui" + ], + "install_type": "git-clone", + "description": "This answers the itch for being able to easily paste [a/CivitAI.com](https://civitai.com) generated data (or other simple metadata) into Comfy in a way that makes it easy to test with multiple checkpoints." + }, + { + "author": "birnam", + "title": "Gen Data Tester [WIP]", + "reference": "https://github.com/birnam/ComfyUI-GenData-Pack", + "files": [ + "https://github.com/birnam/ComfyUI-GenData-Pack" + ], + "install_type": "git-clone", + "description": "This answers the itch for being able to easily paste [a/CivitAI.com](https://civitai.com) generated data (or other simple metadata) into Comfy in a way that makes it easy to test with multiple checkpoints." + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI-AnyText(WIP)", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-AnyText", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-AnyText" + ], + "install_type": "git-clone", + "description": "Unofficial implementation of [a/AnyText](https://github.com/tyxsspa/AnyText/tree/825bcc54687206b15bd7e28ba1a8b095989d58e3) for ComfyUI (EXP)" + }, + { + "author": "nidefawl", + "title": "ComfyUI-nidefawl [UNSAFE]", + "reference": "https://github.com/nidefawl/ComfyUI-nidefawl", + "files": [ + "https://github.com/nidefawl/ComfyUI-nidefawl" + ], + "install_type": "git-clone", + "description": "Nodes:PythonScript, BlendImagesWithBoundedMasks, CropImagesWithMasks, VAELoaderDataType, ModelSamplerTonemapNoiseTest, gcLatentTunnel, ReferenceOnlySimple, EmptyImageWithColor, MaskFromColor, SetLatentCustomNoise, LatentToImage, ImageToLatent, LatentScaledNoise, DisplayAnyType, SamplerCustomCallback, CustomCallback, SplitCustomSigmas, SamplerDPMPP_2M_SDE_nidefawl, LatentPerlinNoise.\n[w/This node is an unsafe node that includes the capability to execute arbitrary Python scripts.]" + }, + { + "author": "kadirnar", + "title": "comfyui_helpers", + "reference": "https://github.com/kadirnar/comfyui_helpers", + "files": [ + "https://github.com/kadirnar/comfyui_helpers" + ], + "install_type": "git-clone", + "description": "A collection of randomly selected and gathered nodes related to noise. NOTE: SD-Advanced-Noise, noise_latent_perlinpinpin, comfy-plasma" + }, + { + "author": "foglerek", + "title": "comfyui-cem-tools", + "reference": "https://github.com/foglerek/comfyui-cem-tools", + "files": [ + "https://github.com/foglerek/comfyui-cem-tools" + ], + "install_type": "git-clone", + "description": "Nodes:ProcessImageBatch" + }, + { + "author": "komojini", + "title": "ComfyUI_Prompt_Template_CustomNodes", + "reference": "https://github.com/komojini/ComfyUI_Prompt_Template_CustomNodes", + "files": [ + "https://github.com/komojini/ComfyUI_Prompt_Template_CustomNodes/raw/main/prompt_with_template.py" + ], + "install_type": "copy", + "description": "Nodes:Prompt with Template" + }, + { + "author": "talesofai", + "title": "comfyui-supersave [WIP]", + "reference": "https://github.com/talesofai/comfyui-supersave", + "files": [ + "https://github.com/talesofai/comfyui-supersave" + ], + "install_type": "git-clone", + "description": "WIP" + }, + { + "author": "Sai-ComfyUI", + "title": "ComfyUI-MS-Nodes [WIP]", + "reference": "https://github.com/Sai-ComfyUI/ComfyUI-MS-Nodes", + "files": [ + "https://github.com/Sai-ComfyUI/ComfyUI-MS-Nodes" + ], + "install_type": "git-clone", + "description": "WIP" + }, + { + "author": "eigenpunk", + "title": "ComfyUI-audio", + "reference": "https://github.com/eigenpunk/ComfyUI-audio", + "files": [ + "https://github.com/eigenpunk/ComfyUI-audio" + ], + "install_type": "git-clone", + "description": "Generative audio tools for ComfyUI. Highly experimental - expect things to break." + }, + { + "author": "Jaxkr", + "title": "comfyui-terminal-command [UNSAFE]", + "reference": "https://github.com/Jaxkr/comfyui-terminal-command", + "files": [ + "https://github.com/Jaxkr/comfyui-terminal-command" + ], + "install_type": "git-clone", + "description": "Nodes: Run Terminal Command. [w/This node is an unsafe node that includes the capability to execute terminal commands.]" + }, + { + "author": "BlueDangerX", + "title": "ComfyUI-BDXNodes [WIP]", + "reference": "https://github.com/BlueDangerX/ComfyUI-BDXNodes", + "files": [ + "https://github.com/BlueDangerX/ComfyUI-BDXNodes" + ], + "install_type": "git-clone", + "description": "Nodes: Node Jumper. Various quality of life testing nodes" + }, + { + "author": "ilovejohnwhite", + "title": "TatToolkit", + "reference": "https://github.com/ilovejohnwhite/UncleBillyGoncho", + "files": [ + "https://github.com/ilovejohnwhite/UncleBillyGoncho" + ], + "install_type": "git-clone", + "description": "Nodes:UWU TTK Preprocessor, Pixel Perfect Resolution, Generation Resolution From Image, Generation Resolution From Latent, Enchance And Resize Hint Images, ..."
+ }, + { + "author": "IvanZhd", + "title": "comfyui-codeformer [WIP]", + "reference": "https://github.com/IvanZhd/comfyui-codeformer", + "files": [ + "https://github.com/IvanZhd/comfyui-codeformer" + ], + "install_type": "git-clone", + "description": "Nodes:Image Inverter" + }, + { + "author": "alt-key-project", + "title": "Dream Project Video Batches [WIP]", + "reference": "https://github.com/alt-key-project/comfyui-dream-video-batches", + "files": [ + "https://github.com/alt-key-project/comfyui-dream-video-batches" + ], + "install_type": "git-clone", + "description": "NOTE: This is currently work in progress. Expect nodes to break (or be broken) until the 1.0 release." + }, + { + "author": "oyvindg", + "title": "ComfyUI-TrollSuite", + "reference": "https://github.com/oyvindg/ComfyUI-TrollSuite", + "files": [ + "https://github.com/oyvindg/ComfyUI-TrollSuite" + ], + "install_type": "git-clone", + "description": "Nodes: BinaryImageMask, ImagePadding, LoadLastCreatedImage, RandomMask, TransparentImage." + }, + { + "author": "romeobuilderotti", + "title": "ComfyUI-EZ-Pipes", + "reference": "https://github.com/romeobuilderotti/ComfyUI-EZ-Pipes", + "files": [ + "https://github.com/romeobuilderotti/ComfyUI-EZ-Pipes" + ], + "install_type": "git-clone", + "description": "ComfyUI-EZ-Pipes is a set of custom pipe nodes for ComfyUI. It provides a set of Input/Edit/Output nodes for each pipe type." + }, + { + "author": "wormley", + "title": "comfyui-wormley-nodes", + "reference": "https://github.com/wormley/comfyui-wormley-nodes", + "files": [ + "https://github.com/wormley/comfyui-wormley-nodes" + ], + "install_type": "git-clone", + "description": "Nodes: CheckpointVAELoaderSimpleText, CheckpointVAESelectorText, LoRA_Tag_To_Stack" + }, + { + "author": "dnl13", + "title": "ComfyUI-dnl13-seg", + "reference": "https://github.com/dnl13/ComfyUI-dnl13-seg", + "files": [ + "https://github.com/dnl13/ComfyUI-dnl13-seg" + ], + "install_type": "git-clone", + "description": "After discovering @storyicon's implementation of Segment Anything, I realized its potential as a powerful tool for ComfyUI if implemented correctly. I delved into the SAM and Dino models. The following is my own adaptation of sam_hq for ComfyUI." + }, + { + "author": "phineas-pta", + "title": "comfy-trt-test [WIP]", + "reference": "https://github.com/phineas-pta/comfy-trt-test", + "files": [ + "https://github.com/phineas-pta/comfy-trt-test" + ], + "install_type": "git-clone", + "description": "Test project for ComfyUI TensorRT Support.\nNOT WORKING YET.\nnot automatic yet, do not use ComfyUI-Manager to install!!!\nnot beginner-friendly yet, still intended for technical users\nNOTE: The reason for registration in the Manager is for guidance, and for detailed installation instructions, please visit the repository." + }, + { + "author": "Brandelan", + "title": "ComfyUI_bd_customNodes", + "reference": "https://github.com/Brandelan/ComfyUI_bd_customNodes", + "files": [ + "https://github.com/Brandelan/ComfyUI_bd_customNodes" + ], + "install_type": "git-clone", + "description": "Nodes: BD Random Range, BD Settings, BD Sequencer."
+ }, + { + "author": "Jordach", + "title": "comfy-consistency-vae", + "reference": "https://github.com/Jordach/comfy-consistency-vae", + "files": [ + "https://github.com/Jordach/comfy-consistency-vae" + ], + "install_type": "git-clone", + "description": "Nodes: Comfy_ConsistencyVAE" + }, + { + "author": "gameltb", + "title": "ComfyUI_stable_fast", + "reference": "https://github.com/gameltb/ComfyUI_stable_fast", + "files": [ + "https://github.com/gameltb/ComfyUI_stable_fast" + ], + "install_type": "git-clone", + "description": "Nodes:ApplyStableFastUnet. Experimental usage of stable-fast." + }, + { + "author": "jn-jairo", + "title": "jn_node_suite_comfyui [WIP]", + "reference": "https://github.com/jn-jairo/jn_node_suite_comfyui", + "files": [ + "https://github.com/jn-jairo/jn_node_suite_comfyui" + ], + "install_type": "git-clone", + "description": "Image manipulation nodes, Temperature control nodes, Tiling nodes, Primitive and operation nodes, ..." + }, + { + "author": "PluMaZero", + "title": "ComfyUI-SpaceFlower", + "reference": "https://github.com/PluMaZero/ComfyUI-SpaceFlower", + "files": [ + "https://github.com/PluMaZero/ComfyUI-SpaceFlower" + ], + "install_type": "git-clone", + "description": "Nodes: SpaceFlower_Prompt, SpaceFlower_HangulPrompt, ..." + }, + { + "author": "laksjdjf", + "title": "ssd-1b-comfyui", + "reference": "https://github.com/laksjdjf/ssd-1b-comfyui", + "files": [ + "https://github.com/laksjdjf/ssd-1b-comfyui" + ], + "install_type": "git-clone", + "description": "Experimental node for SSD-1B. This node is not needed for the latest ComfyUI." + }, + { + "author": "flowtyone", + "title": "comfyui-flowty-lcm", + "reference": "https://github.com/flowtyone/comfyui-flowty-lcm", + "files": [ + "https://github.com/flowtyone/comfyui-flowty-lcm" + ], + "install_type": "git-clone", + "description": "This is an early ComfyUI testing node for LCM, adapted from [a/https://github.com/0xbitches/sd-webui-lcm](https://github.com/0xbitches/sd-webui-lcm). It unfortunately uses the diffusers backend and not comfy's model loading mechanism, but the intention here is just to be able to execute LCM inside comfy.\nNOTE: 0xbitches's 'Latent Consistency Model for ComfyUI' is the original implementation." + }, + { + "author": "doucx", + "title": "ComfyUI_WcpD_Utility_Kit", + "reference": "https://github.com/doucx/ComfyUI_WcpD_Utility_Kit", + "files": [ + "https://github.com/doucx/ComfyUI_WcpD_Utility_Kit" + ], + "install_type": "git-clone", + "description": "Nodes: MergeStrings, ExecStrAsCode, RandnLatentImage. [w/NOTE: This extension includes the ability to execute code as a string in nodes. Be cautious during installation, as it can pose a security risk.]" + }, + { + "author": "WSJUSA", + "title": "pre-comfyui-stablsr", + "reference": "https://github.com/WSJUSA/Comfyui-StableSR", + "files": [ + "https://github.com/WSJUSA/Comfyui-StableSR" + ], + "install_type": "git-clone", + "description": "This is a development repository for debugging the migration of StableSR to ComfyUI" + }, + { + "author": "Dr.Lt.Data", + "title": "ComfyUI-Workflow-Component [WIP]", + "reference": "https://github.com/ltdrdata/ComfyUI-Workflow-Component", + "files": [ + "https://github.com/ltdrdata/ComfyUI-Workflow-Component" + ], + "install_type": "git-clone", + "description": "This extension provides the capability to use ComfyUI Workflow as a component and the ability to use the Image Refiner functionality based on components.\nNOTE: This is an experimental extension feature with no consideration for backward compatibility, and it can be highly unstable." + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/dev/extension-node-map.json b/custom_nodes/ComfyUI-Manager/node_db/dev/extension-node-map.json new file mode 100644 index 0000000000000000000000000000000000000000..4534fe2445cd300f34d114ff0546ec4f2412e8d7 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/dev/extension-node-map.json @@ -0,0 +1,890 @@ +{ + "https://github.com/17Retoucher/ComfyUI_Fooocus": [ + [ + "BasicScheduler", + "CLIPLoader", + "CLIPMergeSimple", + "CLIPSave", + "CLIPSetLastLayer", + "CLIPTextEncode", + "CLIPTextEncodeSDXL", + "CLIPTextEncodeSDXLRefiner", + "CLIPVisionEncode", + "CLIPVisionLoader", + "Canny", + "CheckpointLoader", + "CheckpointLoaderSimple", + "CheckpointSave", + "ConditioningAverage", + "ConditioningCombine", + "ConditioningConcat", + "ConditioningSetArea", + "ConditioningSetAreaPercentage", + "ConditioningSetMask", + "ConditioningSetTimestepRange", + "ConditioningZeroOut", + "ControlNetApply", + "ControlNetApplyAdvanced", + "ControlNetLoader", + "CropMask", + "DiffControlNetLoader", + "DiffusersLoader", + "DualCLIPLoader", + "EmptyImage", + "EmptyLatentImage", + "ExponentialScheduler", + "FeatherMask", + "FlipSigmas", + "Fooocus Controlnet", + "Fooocus Hirefix", + "Fooocus KSampler", + "Fooocus Loader", + "Fooocus LoraStack", + "Fooocus PreKSampler", + "Fooocus negative", + "Fooocus positive", + "Fooocus stylesSelector", + "FreeU", + "FreeU_V2", + "GLIGENLoader", + "GLIGENTextBoxApply", + "GrowMask", + "HyperTile", + "HypernetworkLoader", + "ImageBatch", + "ImageBlend", + "ImageBlur", + "ImageColorToMask", + "ImageCompositeMasked", + "ImageCrop", + "ImageInvert", + "ImageOnlyCheckpointLoader", + "ImagePadForOutpaint", + "ImageQuantize", + "ImageScale", + "ImageScaleBy", + "ImageScaleToTotalPixels", + "ImageSharpen", + "ImageToMask", + "ImageUpscaleWithModel", + "InvertMask", + "JoinImageWithAlpha", + "KSampler", + "KSamplerAdvanced", + "KSamplerSelect", + "KarrasScheduler", + "LatentAdd", + "LatentBatch", + "LatentBlend", + "LatentComposite", + "LatentCompositeMasked", + "LatentCrop", + "LatentFlip", + "LatentFromBatch", + "LatentInterpolate", + "LatentMultiply", + "LatentRotate", + "LatentSubtract", + "LatentUpscale", + "LatentUpscaleBy", + "LoadImage", + "LoadImageMask", + "LoadLatent", + "LoraLoader", + "LoraLoaderModelOnly", + "MaskComposite", + "MaskToImage", + "ModelMergeAdd", + "ModelMergeBlocks", + "ModelMergeSimple", + "ModelMergeSubtract", + "ModelSamplingContinuousEDM", + "ModelSamplingDiscrete", + "PatchModelAddDownscale", + "PerpNeg", + "PolyexponentialScheduler", + "PorterDuffImageComposite", + "PreviewImage", + "RebatchImages", + "RebatchLatents", + "RepeatImageBatch", + "RepeatLatentBatch", + "RescaleCFG", + "SDTurboScheduler", + "SVD_img2vid_Conditioning", + "SamplerCustom", + "SamplerDPMPP_2M_SDE", + "SamplerDPMPP_SDE", + "SaveAnimatedPNG", + "SaveAnimatedWEBP", + "SaveImage", + "SaveLatent", + "SelfAttentionGuidance", + "SetLatentNoiseMask", + "SolidMask", + "SplitImageWithAlpha", + "SplitSigmas", + "StableZero123_Conditioning", + "StyleModelApply", + "StyleModelLoader", + "TomePatchModel", + "UNETLoader", + "UpscaleModelLoader", + "VAEDecode", + "VAEDecodeTiled", + "VAEEncode", + "VAEEncodeForInpaint", + "VAEEncodeTiled", + "VAELoader", + "VAESave", + "VPScheduler", + "VideoLinearCFGGuidance", + "unCLIPCheckpointLoader", + "unCLIPConditioning" + ], + { +
"title_aux": "ComfyUI_Fooocus" + } + ], + "https://github.com/BadCafeCode/execution-inversion-demo-comfyui": [ + [ + "AccumulateNode", + "AccumulationGetItemNode", + "AccumulationGetLengthNode", + "AccumulationHeadNode", + "AccumulationSetItemNode", + "AccumulationTailNode", + "AccumulationToListNode", + "BoolOperationNode", + "ComponentInput", + "ComponentMetadata", + "ComponentOutput", + "DebugPrint", + "ExecutionBlocker", + "FloatConditions", + "ForLoopClose", + "ForLoopOpen", + "IntConditions", + "IntMathOperation", + "InversionDemoAdvancedPromptNode", + "InversionDemoFakeAdvancedPromptNode", + "InversionDemoLazyConditional", + "InversionDemoLazyIndexSwitch", + "InversionDemoLazyMixImages", + "InversionDemoLazySwitch", + "ListToAccumulationNode", + "MakeListNode", + "StringConditions", + "ToBoolNode", + "WhileLoopClose", + "WhileLoopOpen" + ], + { + "title_aux": "execution-inversion-demo-comfyui" + } + ], + "https://github.com/BlueDangerX/ComfyUI-BDXNodes": [ + [ + "BDXTestInt", + "ColorMatch", + "ColorToMask", + "ConditioningMultiCombine", + "ConditioningSetMaskAndCombine", + "ConditioningSetMaskAndCombine3", + "ConditioningSetMaskAndCombine4", + "ConditioningSetMaskAndCombine5", + "CreateAudioMask", + "CreateFadeMask", + "CreateFluidMask", + "CreateGradientMask", + "CreateTextMask", + "CrossFadeImages", + "EmptyLatentImagePresets", + "GrowMaskWithBlur", + "SomethingToString", + "VRAM_Debug" + ], + { + "author": "BlueDangerX", + "title": "BDXNodes", + "title_aux": "ComfyUI-BDXNodes [WIP]" + } + ], + "https://github.com/Brandelan/ComfyUI_bd_customNodes": [ + [ + "BD Random Range", + "BD Random Settings", + "BD Sequencer", + "BD Settings" + ], + { + "title_aux": "ComfyUI_bd_customNodes" + } + ], + "https://github.com/IvanZhd/comfyui-codeformer": [ + [ + "RedBeanie_CustomImageInverter" + ], + { + "title_aux": "comfyui-codeformer [WIP]" + } + ], + "https://github.com/Jaxkr/comfyui-terminal-command": [ + [ + "Terminal" + ], + { + "title_aux": "comfyui-terminal-command [UNSAFE]" + } + ], + "https://github.com/Jordach/comfy-consistency-vae": [ + [ + "Comfy_ConsistencyVAE" + ], + { + "title_aux": "comfy-consistency-vae" + } + ], + "https://github.com/LarryJane491/ComfyUI-ModelUnloader": [ + [ + "Model Unloader" + ], + { + "title_aux": "ComfyUI-ModelUnloader" + } + ], + "https://github.com/MrAdamBlack/CheckProgress": [ + [ + "CHECK_PROGRESS" + ], + { + "title_aux": "CheckProgress [WIP]" + } + ], + "https://github.com/PluMaZero/ComfyUI-SpaceFlower": [ + [ + "SpaceFlower_HangulPrompt", + "SpaceFlower_Prompt" + ], + { + "title_aux": "ComfyUI-SpaceFlower" + } + ], + "https://github.com/Sai-ComfyUI/ComfyUI-MS-Nodes": [ + [ + "FloatMath", + "MS_Boolean", + "MS_Float", + "MS_GenerateSeed", + "MS_NP_Vector3", + "PowerFractalCrossHatchNode", + "PowerFractalNoiseNode", + "VectorMath" + ], + { + "title_aux": "ComfyUI-MS-Nodes [WIP]" + } + ], + "https://github.com/WSJUSA/Comfyui-StableSR": [ + [ + "ColorFix", + "StableSRUpscalerPipe" + ], + { + "author": "WSJUSA", + "description": "This module enables StableSR in Comgfyui. Ported work of sd-webui-stablesr. 
Original work for Auotmaatic1111 version of this module and StableSR credit to LIightChaser and Jianyi Wang.", + "nickname": "StableSR", + "title": "StableSR", + "title_aux": "pre-comfyui-stablsr" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-AnyText": [ + [ + "AnyTextNode_Zho" + ], + { + "title_aux": "ComfyUI-AnyText\uff08WIP\uff09" + } + ], + "https://github.com/alt-key-project/comfyui-dream-video-batches": [ + [ + "Blended Transition [DVB]", + "Calculation [DVB]", + "Create Frame Set [DVB]", + "Divide [DVB]", + "Fade From Black [DVB]", + "Fade To Black [DVB]", + "Float Input [DVB]", + "For Each Done [DVB]", + "For Each Filename [DVB]", + "Frame Set Append [DVB]", + "Frame Set Frame Dimensions Scaled [DVB]", + "Frame Set Index Offset [DVB]", + "Frame Set Merger [DVB]", + "Frame Set Reindex [DVB]", + "Frame Set Repeat [DVB]", + "Frame Set Reverse [DVB]", + "Frame Set Split Beginning [DVB]", + "Frame Set Split End [DVB]", + "Frame Set Splitter [DVB]", + "Generate Inbetween Frames [DVB]", + "Int Input [DVB]", + "Linear Camera Pan [DVB]", + "Linear Camera Roll [DVB]", + "Linear Camera Zoom [DVB]", + "Load Image From Path [DVB]", + "Multiply [DVB]", + "Sine Camera Pan [DVB]", + "Sine Camera Roll [DVB]", + "Sine Camera Zoom [DVB]", + "String Input [DVB]", + "Text Input [DVB]", + "Trace Memory Allocation [DVB]", + "Unwrap Frame Set [DVB]" + ], + { + "title_aux": "Dream Project Video Batches [WIP]" + } + ], + "https://github.com/birnam/ComfyUI-GenData-Pack": [ + [ + "Checkpoint From String \ud83d\udc69\u200d\ud83d\udcbb", + "Checkpoint Rerouter \ud83d\udc69\u200d\ud83d\udcbb", + "Checkpoint Selector Stacker \ud83d\udc69\u200d\ud83d\udcbb", + "Checkpoint Selector \ud83d\udc69\u200d\ud83d\udcbb", + "Checkpoint to String \ud83d\udc69\u200d\ud83d\udcbb", + "Decode GenData \ud83d\udc69\u200d\ud83d\udcbb", + "Encode GenData \ud83d\udc69\u200d\ud83d\udcbb", + "GenData Stacker \ud83d\udc69\u200d\ud83d\udcbb", + "LoRA Stack to String \ud83d\udc69\u200d\ud83d\udcbb", + "LoRA Stacker From Prompt \ud83d\udc69\u200d\ud83d\udcbb", + "Load Checkpoints From File \ud83d\udc69\u200d\ud83d\udcbb", + "Load GenData From Dir \ud83d\udc69\u200d\ud83d\udcbb", + "Parse GenData \ud83d\udc69\u200d\ud83d\udcbb", + "Save Image From GenData \ud83d\udc69\u200d\ud83d\udcbb", + "VAE From String \ud83d\udc69\u200d\ud83d\udcbb", + "VAE to String \ud83d\udc69\u200d\ud83d\udcbb", + "\u00d7 Product CheckpointXGenDatas \ud83d\udc69\u200d\ud83d\udcbb" + ], + { + "title_aux": "Gen Data Tester [WIP]" + } + ], + "https://github.com/blepping/ComfyUI-sonar": [ + [ + "SamplerSonarEuler", + "SamplerSonarEulerA" + ], + { + "title_aux": "ComfyUI-sonar (WIP)" + } + ], + "https://github.com/comfyanonymous/ComfyUI": [ + [ + "BasicScheduler", + "CLIPLoader", + "CLIPMergeSimple", + "CLIPSave", + "CLIPSetLastLayer", + "CLIPTextEncode", + "CLIPTextEncodeSDXL", + "CLIPTextEncodeSDXLRefiner", + "CLIPVisionEncode", + "CLIPVisionLoader", + "Canny", + "CheckpointLoader", + "CheckpointLoaderSimple", + "CheckpointSave", + "ConditioningAverage", + "ConditioningCombine", + "ConditioningConcat", + "ConditioningSetArea", + "ConditioningSetAreaPercentage", + "ConditioningSetAreaStrength", + "ConditioningSetMask", + "ConditioningSetTimestepRange", + "ConditioningZeroOut", + "ControlNetApply", + "ControlNetApplyAdvanced", + "ControlNetLoader", + "CropMask", + "DiffControlNetLoader", + "DiffusersLoader", + "DualCLIPLoader", + "EmptyImage", + "EmptyLatentImage", + "ExponentialScheduler", + "FeatherMask", + "FlipSigmas", + "FreeU", + "FreeU_V2", + 
"GLIGENLoader", + "GLIGENTextBoxApply", + "GrowMask", + "HyperTile", + "HypernetworkLoader", + "ImageBatch", + "ImageBlend", + "ImageBlur", + "ImageColorToMask", + "ImageCompositeMasked", + "ImageCrop", + "ImageInvert", + "ImageOnlyCheckpointLoader", + "ImageOnlyCheckpointSave", + "ImagePadForOutpaint", + "ImageQuantize", + "ImageScale", + "ImageScaleBy", + "ImageScaleToTotalPixels", + "ImageSharpen", + "ImageToMask", + "ImageUpscaleWithModel", + "InpaintModelConditioning", + "InvertMask", + "JoinImageWithAlpha", + "KSampler", + "KSamplerAdvanced", + "KSamplerSelect", + "KarrasScheduler", + "LatentAdd", + "LatentBatch", + "LatentBatchSeedBehavior", + "LatentBlend", + "LatentComposite", + "LatentCompositeMasked", + "LatentCrop", + "LatentFlip", + "LatentFromBatch", + "LatentInterpolate", + "LatentMultiply", + "LatentRotate", + "LatentSubtract", + "LatentUpscale", + "LatentUpscaleBy", + "LoadImage", + "LoadImageMask", + "LoadLatent", + "LoraLoader", + "LoraLoaderModelOnly", + "MaskComposite", + "MaskToImage", + "ModelMergeAdd", + "ModelMergeBlocks", + "ModelMergeSimple", + "ModelMergeSubtract", + "ModelSamplingContinuousEDM", + "ModelSamplingDiscrete", + "PatchModelAddDownscale", + "PerpNeg", + "PhotoMakerEncode", + "PhotoMakerLoader", + "PolyexponentialScheduler", + "PorterDuffImageComposite", + "PreviewImage", + "RebatchImages", + "RebatchLatents", + "RepeatImageBatch", + "RepeatLatentBatch", + "RescaleCFG", + "SDTurboScheduler", + "SD_4XUpscale_Conditioning", + "SVD_img2vid_Conditioning", + "SamplerCustom", + "SamplerDPMPP_2M_SDE", + "SamplerDPMPP_SDE", + "SaveAnimatedPNG", + "SaveAnimatedWEBP", + "SaveImage", + "SaveLatent", + "SelfAttentionGuidance", + "SetLatentNoiseMask", + "SolidMask", + "SplitImageWithAlpha", + "SplitSigmas", + "StableZero123_Conditioning", + "StableZero123_Conditioning_Batched", + "StyleModelApply", + "StyleModelLoader", + "TomePatchModel", + "UNETLoader", + "UpscaleModelLoader", + "VAEDecode", + "VAEDecodeTiled", + "VAEEncode", + "VAEEncodeForInpaint", + "VAEEncodeTiled", + "VAELoader", + "VAESave", + "VPScheduler", + "VideoLinearCFGGuidance", + "unCLIPCheckpointLoader", + "unCLIPConditioning" + ], + { + "title_aux": "ComfyUI" + } + ], + "https://github.com/dnl13/ComfyUI-dnl13-seg": [ + [ + "Automatic Segmentation (dnl13)", + "BatchSelector (dnl13)", + "Combine Images By Mask (dnl13)", + "Dinov1 Model Loader (dnl13)", + "Mask with prompt (dnl13)", + "RGB (dnl13)", + "SAM Model Loader (dnl13)" + ], + { + "title_aux": "ComfyUI-dnl13-seg" + } + ], + "https://github.com/doucx/ComfyUI_WcpD_Utility_Kit": [ + [ + "BlackImage", + "CopyImage(Wayland)", + "ExecStrAsCode", + "MergeStrings", + "YamlToPrompt" + ], + { + "title_aux": "ComfyUI_WcpD_Utility_Kit" + } + ], + "https://github.com/eigenpunk/ComfyUI-audio": [ + [ + "ApplyVoiceFixer", + "BatchAudio", + "ClipAudio", + "CombineImageWithAudio", + "ConcatAudio", + "ConvertAudio", + "FlattenAudioBatch", + "LoadAudio", + "MusicgenGenerate", + "MusicgenHFGenerate", + "MusicgenHFLoader", + "MusicgenLoader", + "PreviewAudio", + "SaveAudio", + "SpectrogramImage", + "TortoiseTTSGenerate", + "TortoiseTTSLoader", + "VALLEXGenerator", + "VALLEXLoader", + "VALLEXVoicePromptFromAudio", + "VALLEXVoicePromptLoader" + ], + { + "title_aux": "ComfyUI-audio" + } + ], + "https://github.com/flowtyone/comfyui-flowty-lcm": [ + [ + "LCMSampler" + ], + { + "title_aux": "comfyui-flowty-lcm" + } + ], + "https://github.com/foglerek/comfyui-cem-tools": [ + [ + "ProcessImageBatch" + ], + { + "title_aux": "comfyui-cem-tools" + } + ], + 
"https://github.com/gameltb/ComfyUI_stable_fast": [ + [ + "ApplyStableFastUnet", + "ApplyTensorRTControlNet", + "ApplyTensorRTUnet", + "ApplyTensorRTVaeDecoder" + ], + { + "title_aux": "ComfyUI_stable_fast" + } + ], + "https://github.com/ilovejohnwhite/UncleBillyGoncho": [ + [ + "CannyEdgePreprocessor", + "HintImageEnchance", + "ImageGenResolutionFromImage", + "ImageGenResolutionFromLatent", + "LineArtPreprocessor", + "LinkMasterNode", + "PiDiNetPreprocessor", + "PixelPerfectResolution", + "SuckerPunch", + "UWU_Preprocessor", + "VooDooNode", + "VooDooNode2" + ], + { + "title_aux": "TatToolkit" + } + ], + "https://github.com/jn-jairo/jn_node_suite_comfyui": [ + [ + "JN_AreaInfo", + "JN_AreaNormalize", + "JN_AreaWidthHeight", + "JN_AreaXY", + "JN_Blip", + "JN_BlipLoader", + "JN_BooleanOperation", + "JN_Condition", + "JN_CoolDown", + "JN_CoolDownOutput", + "JN_CropFace", + "JN_DatetimeFormat", + "JN_DatetimeInfo", + "JN_DatetimeNow", + "JN_Dump", + "JN_DumpOutput", + "JN_FaceRestoreModelLoader", + "JN_FaceRestoreWithModel", + "JN_FirstActive", + "JN_ImageAddMask", + "JN_ImageBatch", + "JN_ImageCenterArea", + "JN_ImageCrop", + "JN_ImageGrid", + "JN_ImageInfo", + "JN_ImageSharpness", + "JN_ImageSquare", + "JN_ImageUncrop", + "JN_KSampler", + "JN_KSamplerAdvancedParams", + "JN_KSamplerFaceRestoreParams", + "JN_KSamplerResizeInputParams", + "JN_KSamplerResizeMaskAreaParams", + "JN_KSamplerResizeOutputParams", + "JN_KSamplerSeamlessParams", + "JN_KSamplerTileParams", + "JN_LoadImageDirectory", + "JN_LogicOperation", + "JN_MaskInfo", + "JN_MathOperation", + "JN_MathOperationArray", + "JN_PrimitiveArrayInfo", + "JN_PrimitiveBatchToArray", + "JN_PrimitiveBoolean", + "JN_PrimitiveFloat", + "JN_PrimitiveInt", + "JN_PrimitivePrompt", + "JN_PrimitiveString", + "JN_PrimitiveStringMultiline", + "JN_PrimitiveStringToArray", + "JN_PrimitiveToArray", + "JN_PrimitiveToBoolean", + "JN_PrimitiveToFloat", + "JN_PrimitiveToInt", + "JN_PrimitiveToString", + "JN_RemoveBackground", + "JN_Seamless", + "JN_SeamlessBorder", + "JN_SeamlessBorderCrop", + "JN_SelectItem", + "JN_Sleep", + "JN_SleepOutput", + "JN_SliceOperation", + "JN_StopIf", + "JN_StopIfOutput", + "JN_TextConcatenation", + "JN_TextReplace", + "JN_TimedeltaFormat", + "JN_TimedeltaInfo", + "JN_VAEPatch" + ], + { + "title_aux": "jn_node_suite_comfyui [WIP]" + } + ], + "https://github.com/kadirnar/ComfyUI-Transformers": [ + [ + "DepthEstimationPipeline" + ], + { + "title_aux": "ComfyUI-Transformers" + } + ], + "https://github.com/kadirnar/comfyui_helpers": [ + [ + "CLIPSeg", + "CircularVAEDecode", + "CombineMasks", + "CustomKSamplerAdvancedTile", + "ImageLoaderAndProcessor", + "ImageToContrastMask", + "JDC_AutoContrast", + "JDC_BlendImages", + "JDC_BrownNoise", + "JDC_Contrast", + "JDC_EqualizeGrey", + "JDC_GaussianBlur", + "JDC_GreyNoise", + "JDC_Greyscale", + "JDC_ImageLoader", + "JDC_ImageLoaderMeta", + "JDC_PinkNoise", + "JDC_Plasma", + "JDC_PlasmaSampler", + "JDC_PowerImage", + "JDC_RandNoise", + "JDC_ResizeFactor", + "LatentGaussianNoise", + "LatentToRGB", + "MathEncode", + "NoisyLatentPerlin" + ], + { + "title_aux": "comfyui_helpers" + } + ], + "https://github.com/kappa54m/ComfyUI_Usability": [ + [ + "LoadImageByPath", + "LoadImageDedup" + ], + { + "title_aux": "ComfyUI_Usability (WIP)" + } + ], + "https://github.com/komojini/ComfyUI_Prompt_Template_CustomNodes/raw/main/prompt_with_template.py": [ + [ + "ObjectPromptWithTemplate", + "PromptWithTemplate" + ], + { + "title_aux": "ComfyUI_Prompt_Template_CustomNodes" + } + ], + 
"https://github.com/laksjdjf/ssd-1b-comfyui": [ + [ + "SSD-1B-Loader" + ], + { + "title_aux": "ssd-1b-comfyui" + } + ], + "https://github.com/ltdrdata/ComfyUI-Workflow-Component": [ + [ + "ComboToString", + "ExecutionBlocker", + "ExecutionControlString", + "ExecutionOneOf", + "ExecutionSwitch", + "InputUnzip", + "InputZip", + "LoopControl", + "LoopCounterCondition", + "OptionalTest", + "TensorToCPU" + ], + { + "title_aux": "ComfyUI-Workflow-Component [WIP]" + } + ], + "https://github.com/nidefawl/ComfyUI-nidefawl": [ + [ + "BlendImagesWithBoundedMasks", + "CropImagesWithMasks", + "CustomCallback", + "DisplayAnyType", + "EmptyImageWithColor", + "ImageToLatent", + "LatentPerlinNoise", + "LatentScaledNoise", + "LatentToImage", + "MaskFromColor", + "ModelSamplerTonemapNoiseTest", + "PythonScript", + "ReferenceOnlySimple", + "SamplerCustomCallback", + "SamplerDPMPP_2M_SDE_nidefawl", + "SetLatentCustomNoise", + "SplitCustomSigmas", + "VAELoaderDataType", + "gcLatentTunnel" + ], + { + "title_aux": "ComfyUI-nidefawl [UNSAFE]" + } + ], + "https://github.com/nkchocoai/ComfyUI-PromptUtilities": [ + [ + "PromptUtilitiesConstString", + "PromptUtilitiesConstStringMultiLine", + "PromptUtilitiesFormatString", + "PromptUtilitiesJoinStringList", + "PromptUtilitiesLoadPreset", + "PromptUtilitiesLoadPresetAdvanced" + ], + { + "title_aux": "ComfyUI-PromptUtilities" + } + ], + "https://github.com/oyvindg/ComfyUI-TrollSuite": [ + [ + "BinaryImageMask", + "ImagePadding", + "LoadLastImage", + "RandomMask", + "TransparentImage" + ], + { + "title_aux": "ComfyUI-TrollSuite" + } + ], + "https://github.com/phineas-pta/comfy-trt-test": [ + [ + "TRT_Lora_Loader", + "TRT_Torch_Compile", + "TRT_Unet_Loader" + ], + { + "author": "PTA", + "description": "attempt to use TensorRT with ComfyUI, not yet compatible with ComfyUI-Manager, see README for instructions", + "nickname": "comfy trt test", + "title": "TensorRT with ComfyUI (work-in-progress)", + "title_aux": "comfy-trt-test [WIP]" + } + ], + "https://github.com/poisenbery/NudeNet-Detector-Provider": [ + [ + "NudeNetDetectorProvider" + ], + { + "title_aux": "NudeNet-Detector-Provider [WIP]" + } + ], + "https://github.com/prismwastaken/comfyui-tools": [ + [ + "Prism-RandomNormal" + ], + { + "title_aux": "prism-tools" + } + ], + "https://github.com/unanan/ComfyUI-clip-interrogator": [ + [ + "ComfyUIClipInterrogator", + "ShowText" + ], + { + "title_aux": "ComfyUI-clip-interrogator [WIP]" + } + ], + "https://github.com/wormley/comfyui-wormley-nodes": [ + [ + "CheckpointVAELoaderSimpleText", + "CheckpointVAESelectorText", + "LoRA_Tag_To_Stack" + ], + { + "title_aux": "comfyui-wormley-nodes" + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/dev/model-list.json b/custom_nodes/ComfyUI-Manager/node_db/dev/model-list.json new file mode 100644 index 0000000000000000000000000000000000000000..8e3e1dc4858a08aa46190aa53ba320d565206cf4 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/dev/model-list.json @@ -0,0 +1,3 @@ +{ + "models": [] +} diff --git a/custom_nodes/ComfyUI-Manager/node_db/dev/scan.sh b/custom_nodes/ComfyUI-Manager/node_db/dev/scan.sh new file mode 100755 index 0000000000000000000000000000000000000000..f9589f3c57268b258caa19e8569cc6f1d1882eae --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/dev/scan.sh @@ -0,0 +1,3 @@ +#!/bin/bash +rm ~/.tmp/dev/*.py > /dev/null 2>&1 +python ../../scanner.py ~/.tmp/dev diff --git a/custom_nodes/ComfyUI-Manager/node_db/forked/custom-node-list.json 
b/custom_nodes/ComfyUI-Manager/node_db/forked/custom-node-list.json new file mode 100644 index 0000000000000000000000000000000000000000..3bd8ce03e8638924d4dcb07e477cadac30b40ab5 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/forked/custom-node-list.json @@ -0,0 +1,14 @@ +{ + "custom_nodes": [ + { + "author": "gameltb", + "title": "comfyui-stablsr", + "reference": "https://github.com/gameltb/Comfyui-StableSR", + "files": [ + "https://github.com/gameltb/Comfyui-StableSR" + ], + "install_type": "git-clone", + "description": "This is a development repository for debugging the migration of StableSR to ComfyUI\n\nNOTE: Forked from [https://github.com/gameltb/Comfyui-StableSR]\nPut the StableSR [a/webui_768v_139.ckpt](https://huggingface.co/Iceclear/StableSR/resolve/main/webui_768v_139.ckpt) model into ComfyUI/models/stablesr/, and put the StableSR [a/stablesr_768v_000139.ckpt](https://huggingface.co/Iceclear/StableSR/resolve/main/stablesr_768v_000139.ckpt) model into ComfyUI/models/checkpoints/" + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/forked/extension-node-map.json b/custom_nodes/ComfyUI-Manager/node_db/forked/extension-node-map.json new file mode 100644 index 0000000000000000000000000000000000000000..9e26dfeeb6e641a33dae4961196235bdb965b21b --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/forked/extension-node-map.json @@ -0,0 +1 @@ +{} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/forked/model-list.json b/custom_nodes/ComfyUI-Manager/node_db/forked/model-list.json new file mode 100644 index 0000000000000000000000000000000000000000..8e3e1dc4858a08aa46190aa53ba320d565206cf4 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/forked/model-list.json @@ -0,0 +1,3 @@ +{ + "models": [] +} diff --git a/custom_nodes/ComfyUI-Manager/node_db/forked/scan.sh b/custom_nodes/ComfyUI-Manager/node_db/forked/scan.sh new file mode 100755 index 0000000000000000000000000000000000000000..5d8d8c48b6e3f48dc1491738c1226f574909c05d --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/forked/scan.sh @@ -0,0 +1,4 @@ +#!/bin/bash +source ../../../../venv/bin/activate +rm .tmp/*.py > /dev/null +python ../../scanner.py diff --git a/custom_nodes/ComfyUI-Manager/node_db/legacy/alter-list.json b/custom_nodes/ComfyUI-Manager/node_db/legacy/alter-list.json new file mode 100644 index 0000000000000000000000000000000000000000..9e26dfeeb6e641a33dae4961196235bdb965b21b --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/legacy/alter-list.json @@ -0,0 +1 @@ +{} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/legacy/custom-node-list.json b/custom_nodes/ComfyUI-Manager/node_db/legacy/custom-node-list.json new file mode 100644 index 0000000000000000000000000000000000000000..a5671a2035ab832b61b119e2d921528e6837a64c --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/legacy/custom-node-list.json @@ -0,0 +1,235 @@ +{ + "custom_nodes": [ + { + "author": "#NOTICE_1.13", + "title": "NOTICE: This channel is not the default channel.", + "reference": "https://github.com/ltdrdata/ComfyUI-Manager", + "files": [], + "install_type": "git-clone", + "description": "If you see this message, your ComfyUI-Manager is outdated.\nLegacy channel provides only the list of the deprecated nodes. If you want to find the complete node list, please go to the Default channel."
+ }, + + + { + "author": "ccvv804", + "title": "ComfyUI StableCascade using diffusers for Low VRAM [DEPRECATED]", + "reference": "https://github.com/ccvv804/ComfyUI-DiffusersStableCascade-LowVRAM", + "files": [ + "https://github.com/ccvv804/ComfyUI-DiffusersStableCascade-LowVRAM" + ], + "install_type": "git-clone", + "description": "Works with RTX 4070ti 12GB.\nSimple quick wrapper for [a/https://huggingface.co/stabilityai/stable-cascade](https://huggingface.co/stabilityai/stable-cascade)\nComfy is going to implement this properly soon, this repo is just for quick testing for the impatient!" + }, + { + "author": "kijai", + "title": "ComfyUI StableCascade using diffusers [DEPRECATED]", + "reference": "https://github.com/kijai/ComfyUI-DiffusersStableCascade", + "files": [ + "https://github.com/kijai/ComfyUI-DiffusersStableCascade" + ], + "install_type": "git-clone", + "description": "Simple quick wrapper for [a/https://huggingface.co/stabilityai/stable-cascade](https://huggingface.co/stabilityai/stable-cascade)\nComfy is going to implement this properly soon, this repo is just for quick testing for the impatient!" + }, + { + "author": "solarpush", + "title": "comfyui_sendimage_node [REMOVED]", + "reference": "https://github.com/solarpush/comfyui_sendimage_node", + "files": [ + "https://github.com/solarpush/comfyui_sendimage_node" + ], + "install_type": "git-clone", + "description": "Send images to the pod." + }, + { + "author": "azazeal04", + "title": "ComfyUI-Styles", + "reference": "https://github.com/azazeal04/ComfyUI-Styles", + "files": [ + "https://github.com/azazeal04/ComfyUI-Styles" + ], + "install_type": "git-clone", + "description": "Nodes:Anime_Styler, Fantasy_Styler, Gothic_Styler, Line_Art_Styler, Movie_Poster_Styler, Punk_Styler, Travel_Poster_Styler. This extension offers 8 art style nodes, each of which includes approximately 50 individual style variations.\n\nNOTE: Due to the dynamic nature of node name definitions, ComfyUI-Manager cannot recognize the node list from this extension. The Missing nodes and Badge features are not available for this extension.\nNOTE: This extension is removed. Users who were previously using this node should install ComfyUI-styles-all instead." + }, + { + "author": "hnmr293", + "title": "ComfyUI-nodes-hnmr", + "reference": "https://github.com/hnmr293/ComfyUI-nodes-hnmr", + "files": [ + "https://github.com/hnmr293/ComfyUI-nodes-hnmr" + ], + "install_type": "git-clone", + "description": "Provides various custom nodes for Latent, Sampling, Model, Loader, Image, Text" + }, + { + "author": "bvhari", + "title": "ComfyUI_PerpNeg [WIP]", + "reference": "https://github.com/bvhari/ComfyUI_PerpNeg", + "files": [ + "https://github.com/bvhari/ComfyUI_PerpNeg" + ], + "install_type": "git-clone", + "description": "Nodes: KSampler (Advanced + Perp-Neg). Implementation of [a/Perp-Neg](https://perp-neg.github.io/)\nIncludes Tonemap and CFG Rescale options.\nComfyUI custom node to convert latent to RGB. [w/WARNING: Experimental code, might have incompatibilities and edge cases.]\nNOTE: In the latest version of ComfyUI, this extension is included as built-in."
+ }, + { + "author": "laksjdjf", + "title": "IPAdapter-ComfyUI", + "reference": "https://github.com/laksjdjf/IPAdapter-ComfyUI", + "files": [ + "https://github.com/laksjdjf/IPAdapter-ComfyUI" + ], + "install_type": "git-clone", + "description": "This custom node provides a loader for the IP-Adapter model. [w/NOTE: To use this extension node, you need to download the [a/ip-adapter_sd15.bin](https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15.bin) file and place it in the %%**custom_nodes/IPAdapter-ComfyUI/models**%% directory. Additionally, you need to download the 'Clip vision model' from the 'Install models' menu as well.]\nNOTE: Use ComfyUI_IPAdapter_plus instead of this." + }, + { + "author": "RockOfFire", + "title": "CR Animation Nodes", + "reference": "https://github.com/RockOfFire/CR_Animation_Nodes", + "files": [ + "https://github.com/RockOfFire/CR_Animation_Nodes" + ], + "install_type": "git-clone", + "description": "A comprehensive suite of nodes to enhance your animations. These nodes include some features similar to Deforum, and also some new ideas.\nNOTE: These nodes are merged into Comfyroll Custom Nodes." + }, + { + "author": "tkoenig89", + "title": "Load Image with metadata", + "reference": "https://github.com/tkoenig89/ComfyUI_Load_Image_With_Metadata", + "files": [ + "https://github.com/tkoenig89/ComfyUI_Load_Image_With_Metadata" + ], + "install_type": "git-clone", + "description": "A custom node for ComfyUI to read generation data from images (prompt, seed, size...). This could be used when upscaling generated images to use the original prompt and seed." + }, + { + "author": "LucianoCirino", + "title": "Efficiency Nodes for ComfyUI [LEGACY]", + "reference": "https://github.com/LucianoCirino/efficiency-nodes-comfyui", + "files": [ + "https://github.com/LucianoCirino/efficiency-nodes-comfyui" + ], + "install_type": "git-clone", + "description": "A collection of ComfyUI custom nodes to help streamline workflows and reduce total node count.\nNOTE: This repository is the original repository but is no longer maintained. Please use the forked version by jags." + }, + { + "author": "GeLi1989", + "title": "roop nodes for ComfyUI", + "reference": "https://github.com/GeLi1989/GK-beifen-ComfyUI_roop", + "files": [ + "https://github.com/GeLi1989/GK-beifen-ComfyUI_roop" + ], + "install_type": "git-clone", + "description": "ComfyUI nodes for the roop A1111 webui script. NOTE: You need to download the model to use this node. NOTE: This is removed." + }, + { + "author": "ProDALOR", + "title": "comfyui_u2net", + "reference": "https://github.com/ProDALOR/comfyui_u2net", + "files": [ + "https://github.com/ProDALOR/comfyui_u2net" + ], + "install_type": "git-clone", + "description": "Nodes: Load U2Net model, U2Net segmentation, To mask, Segmentation to mask, U2NetBaseNormalization, U2NetMaxNormalization. NOTE: This is removed." + }, + { + "author": "FizzleDorf", + "title": "AIT", + "reference": "https://github.com/FizzleDorf/AIT", + "files": [ + "https://github.com/FizzleDorf/AIT" + ], + "install_type": "git-clone", + "description": "Nodes: Load AITemplate, Load AITemplate (ControlNet), VAE Decode (AITemplate), VAE Encode (AITemplate), VAE Encode (AITemplate, Inpaint). Experimental usage of AITemplate. NOTE: This is a deprecated extension. Use ComfyUI-AIT instead of this." + }, + { + "author": "chenbaiyujason", + "title": "sc-node-comfyui", + "reference": "https://github.com/chenbaiyujason/sc-node-comfyui", + "files": [ + "https://github.com/chenbaiyujason/sc-node-comfyui" + ], + "install_type": "git-clone", + "description": "Nodes for GPT interaction and text manipulation" + }, + { + "author": "asd417", + "title": "CheckpointTomeLoader", + "reference": "https://github.com/asd417/tomeSD_for_Comfy", + "files": [ + "https://github.com/ltdrdata/ComfyUI-tomeSD-installer" + ], + "install_type": "git-clone", + "description": "tomeSD (https://github.com/dbolya/tomesd) applied to the ComfyUI stable diffusion UI using a custom node. Note: In vanilla ComfyUI, the TomePatchModel node is provided as a built-in feature." + }, + { + "author": "gamert", + "title": "ComfyUI_tagger", + "reference": "https://github.com/gamert/ComfyUI_tagger", + "pip": ["gradio"], + "files": [ + "https://github.com/gamert/ComfyUI_tagger" + ], + "install_type": "git-clone", + "description": "Nodes: CLIPTextEncodeTaggerDD, ImageTaggerDD.\nWARNING: Installing the current version is causing an issue where ComfyUI fails to start." + }, + { + "author": "Fannovel16", + "title": "ControlNet Preprocessors", + "reference": "https://github.com/Fannovel16/comfy_controlnet_preprocessors", + "files": [ + "https://github.com/Fannovel16/comfy_controlnet_preprocessors" + ], + "install_type": "git-clone", + "description": "ControlNet Preprocessors. (To use this extension, you need to download the required model file from Install Models)\nNOTE: Please uninstall this custom node and instead install 'ComfyUI's ControlNet Auxiliary Preprocessors' from the default channel.\nTo use nodes belonging to controlnet v1 such as Canny_Edge_Preprocessor, MIDAS_Depth_Map_Preprocessor, Uniformer_SemSegPreprocessor, etc., you need to copy the config.yaml.example file to config.yaml and change skip_v1: True to skip_v1: False." + }, + { + "author": "comfyanonymous", + "title": "ComfyUI_experiments/sampler_tonemap", + "reference": "https://github.com/comfyanonymous/ComfyUI_experiments", + "files": [ + "https://github.com/comfyanonymous/ComfyUI_experiments/raw/master/sampler_tonemap.py" + ], + "install_type": "copy", + "description": "ModelSamplerTonemapNoiseTest is a node that makes the sampler use a simple tonemapping algorithm to tonemap the noise. It will let you use a higher CFG without breaking the image. To use a higher CFG, lower the multiplier value. Similar to the Dynamic Thresholding extension of A1111." + }, + { + "author": "comfyanonymous", + "title": "ComfyUI_experiments/sampler_rescalecfg", + "reference": "https://github.com/comfyanonymous/ComfyUI_experiments", + "files": [ + "https://github.com/comfyanonymous/ComfyUI_experiments/raw/master/sampler_rescalecfg.py" + ], + "install_type": "copy", + "description": "RescaleClassifierFreeGuidance improves the problem of images being degraded by high CFG. To use a higher CFG, lower the multiplier value. Similar to the Dynamic Thresholding extension of A1111. (reference paper)\nIt is recommended to use the integrated custom nodes in the default channel for update support rather than installing individual nodes." + }, + { + "author": "comfyanonymous", + "title": "ComfyUI_experiments/advanced_model_merging", + "reference": "https://github.com/comfyanonymous/ComfyUI_experiments", + "files": [ + "https://github.com/comfyanonymous/ComfyUI_experiments/raw/master/advanced_model_merging.py" + ], + "install_type": "copy", + "description": "This provides a detailed model merge feature based on block weight. ModelMergeBlocks, in vanilla ComfyUI, allows for adjusting the ratios of input/middle/output layers, but this node provides ratio adjustments for all blocks within each layer.\nIt is recommended to use the integrated custom nodes in the default channel for update support rather than installing individual nodes." + }, + { + "author": "comfyanonymous", + "title": "ComfyUI_experiments/sdxl_model_merging", + "reference": "https://github.com/comfyanonymous/ComfyUI_experiments", + "files": [ + "https://github.com/comfyanonymous/ComfyUI_experiments/raw/master/sdxl_model_merging.py" + ], + "install_type": "copy", + "description": "These nodes provide the capability to merge SDXL base models.\nIt is recommended to use the integrated custom nodes in the default channel for update support rather than installing individual nodes." + }, + { + "author": "comfyanonymous", + "title": "ComfyUI_experiments/reference_only", + "reference": "https://github.com/comfyanonymous/ComfyUI_experiments", + "files": [ + "https://github.com/comfyanonymous/ComfyUI_experiments/raw/master/reference_only.py" + ], + "install_type": "copy", + "description": "This node provides functionality corresponding to Reference Only in ControlNet.\nIt is recommended to use the integrated custom nodes in the default channel for update support rather than installing individual nodes." + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/legacy/extension-node-map.json b/custom_nodes/ComfyUI-Manager/node_db/legacy/extension-node-map.json new file mode 100644 index 0000000000000000000000000000000000000000..9e26dfeeb6e641a33dae4961196235bdb965b21b --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/legacy/extension-node-map.json @@ -0,0 +1 @@ +{} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/legacy/model-list.json b/custom_nodes/ComfyUI-Manager/node_db/legacy/model-list.json new file mode 100644 index 0000000000000000000000000000000000000000..8e3e1dc4858a08aa46190aa53ba320d565206cf4 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/legacy/model-list.json @@ -0,0 +1,3 @@ +{ + "models": [] +} diff --git a/custom_nodes/ComfyUI-Manager/node_db/new/alter-list.json b/custom_nodes/ComfyUI-Manager/node_db/new/alter-list.json new file mode 100644 index 0000000000000000000000000000000000000000..072c3bb5e8bd05b6f14f6df25386dc1e1010a137 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/new/alter-list.json @@ -0,0 +1,4 @@ +{ + "items": [ + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/new/custom-node-list.json b/custom_nodes/ComfyUI-Manager/node_db/new/custom-node-list.json new file mode 100644 index 0000000000000000000000000000000000000000..1b241afc34f95cc28f89109340630d0c67710abf --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/new/custom-node-list.json @@ -0,0 +1,695 @@ +{ + "custom_nodes": [ + { + "author": "#NOTICE_1.13", + "title": "NOTICE: This channel is not the default channel.", + "reference": "https://github.com/ltdrdata/ComfyUI-Manager", + "files": [], + "install_type": "git-clone", + "description": "If you see this message, your ComfyUI-Manager is outdated.\nRecent channel provides only the list of the latest nodes. If you want to find the complete node list, please go to the Default channel." + }, + + + { + "author": "Kijai", + "title": "Animatediff MotionLoRA Trainer", + "reference": "https://github.com/kijai/ComfyUI-ADMotionDirector", + "files": [ + "https://github.com/kijai/ComfyUI-ADMotionDirector" + ], + "install_type": "git-clone", + "description": "This is a trainer for AnimateDiff MotionLoRAs, based on the implementation of MotionDirector by ExponentialML." + }, + { + "author": "GavChap", + "title": "ComfyUI-CascadeResolutions", + "reference": "https://github.com/GavChap/ComfyUI-CascadeResolutions", + "files": [ + "https://github.com/GavChap/ComfyUI-CascadeResolutions" + ], + "install_type": "git-clone", + "description": "Nodes:Cascade Resolutions" + }, + { + "author": "blepping", + "title": "ComfyUI-sonar", + "reference": "https://github.com/blepping/ComfyUI-sonar", + "files": [ + "https://github.com/blepping/ComfyUI-sonar" + ], + "install_type": "git-clone", + "description": "A janky implementation of Sonar sampling (momentum-based sampling) for ComfyUI."
+ }, + { + "author": "StartHua", + "title": "comfyui_segformer_b2_clothes", + "reference": "https://github.com/StartHua/Comfyui_segformer_b2_clothes", + "files": [ + "https://github.com/StartHua/Comfyui_segformer_b2_clothes" + ], + "install_type": "git-clone", + "description": "SegFormer model fine-tuned on ATR dataset for clothes segmentation but can also be used for human segmentation!\nDownload the weight and put it under checkpoints: [a/https://huggingface.co/mattmdjaga/segformer_b2_clothes](https://huggingface.co/mattmdjaga/segformer_b2_clothes)" + }, + { + "author": "AshMartian", + "title": "Dir Gir", + "reference": "https://github.com/AshMartian/ComfyUI-DirGir", + "files": [ + "https://github.com/AshMartian/ComfyUI-DirGir/raw/main/dir_picker.py", + "https://github.com/AshMartian/ComfyUI-DirGir/raw/main/dir_loop.py" + ], + "install_type": "copy", + "description": "A collection of ComfyUI directory automation utility nodes. Directory Get It Right adds a GUI directory browser, and smart directory loop/iteration node that supports regex and file extension filtering." + }, + { + "author": "ccvv804", + "title": "ComfyUI StableCascade using diffusers for Low VRAM", + "reference": "https://github.com/ccvv804/ComfyUI-DiffusersStableCascade-LowVRAM", + "files": [ + "https://github.com/ccvv804/ComfyUI-DiffusersStableCascade-LowVRAM" + ], + "install_type": "git-clone", + "description": "Works with RTX 4070ti 12GB.\nSimple quick wrapper for [a/https://huggingface.co/stabilityai/stable-cascade](https://huggingface.co/stabilityai/stable-cascade)\nComfy is going to implement this properly soon, this repo is just for quick testing for the impatient!" + }, + { + "author": "yuvraj108c", + "title": "ComfyUI-Pronodes", + "reference": "https://github.com/yuvraj108c/ComfyUI-Pronodes", + "files": [ + "https://github.com/yuvraj108c/ComfyUI-Pronodes" + ], + "install_type": "git-clone", + "description": "A collection of nice utility nodes for ComfyUI" + }, + { + "author": "pkpkTech", + "title": "ComfyUI-SaveQueues", + "reference": "https://github.com/pkpkTech/ComfyUI-SaveQueues", + "files": [ + "https://github.com/pkpkTech/ComfyUI-SaveQueues" + ], + "install_type": "git-clone", + "description": "Add a button to the menu to save and load the running queue and the pending queues.\nThis is intended to be used when you want to exit ComfyUI with queues still remaining." + }, + { + "author": "jordoh", + "title": "ComfyUI Deepface", + "reference": "https://github.com/jordoh/ComfyUI-Deepface", + "files": [ + "https://github.com/jordoh/ComfyUI-Deepface" + ], + "install_type": "git-clone", + "description": "ComfyUI nodes wrapping the [a/deepface](https://github.com/serengil/deepface) library." + }, + { + "author": "kijai", + "title": "ComfyUI StableCascade using diffusers", + "reference": "https://github.com/kijai/ComfyUI-DiffusersStableCascade", + "files": [ + "https://github.com/kijai/ComfyUI-DiffusersStableCascade" + ], + "install_type": "git-clone", + "description": "Simple quick wrapper for [a/https://huggingface.co/stabilityai/stable-cascade](https://huggingface.co/stabilityai/stable-cascade)\nComfy is going to implement this properly soon, this repo is just for quick testing for the impatient!" + }, + { + "author": "Extraltodeus", + "title": "ComfyUI-AutomaticCFG", + "reference": "https://github.com/Extraltodeus/ComfyUI-AutomaticCFG", + "files": [ + "https://github.com/Extraltodeus/ComfyUI-AutomaticCFG" + ], + "install_type": "git-clone", + "description": "My own version 'from scratch' of a self-rescaling CFG. 
It isn't much but it's honest work.\nTLDR: set your CFG at 8 to try it. No more burned images and artifacts. CFG is also a bit more sensitive because it's a proportion around 8. A low scale like 4 also gives really nice results, since your CFG is not the CFG anymore. In general, even with relatively low settings, it seems to improve the quality." + }, + { + "author": "Mamaaaamooooo", + "title": "Batch Rembg for ComfyUI", + "reference": "https://github.com/Mamaaaamooooo/batchImg-rembg-ComfyUI-nodes", + "files": [ + "https://github.com/Mamaaaamooooo/batchImg-rembg-ComfyUI-nodes" + ], + "install_type": "git-clone", + "description": "Removes the background from multiple images." + }, + { + "author": "ShmuelRonen", + "title": "ComfyUI-SVDResizer", + "reference": "https://github.com/ShmuelRonen/ComfyUI-SVDResizer", + "files": [ + "https://github.com/ShmuelRonen/ComfyUI-SVDResizer" + ], + "install_type": "git-clone", + "description": "SVDResizer is a helper for resizing the source image according to the sizes supported by Stable Video Diffusion. The rationale behind changing the size of the image in steps between 576 and 1024 is the use of the greatest common divisor of these two numbers, which is 64. SVD is lenient with resizing that adheres to this rule, so the chance of getting coherent video at a size other than the standard 576x1024 is greater. It is advisable to keep the value 1024 constant and play with the second size to maintain the stability of the result." + }, + { + "author": "xiaoxiaodesha", + "title": "hd-nodes-comfyui", + "reference": "https://github.com/xiaoxiaodesha/hd_node", + "files": [ + "https://github.com/xiaoxiaodesha/hd_node" + ], + "install_type": "git-clone", + "description": "Nodes:Combine HDMasks, Cover HDMasks, HD FaceIndex, HD SmoothEdge, HD GetMaskArea, HD Image Levels, HD Ultimate SD Upscale" + }, + { + "author": "StartHua", + "title": "Comfyui_joytag", + "reference": "https://github.com/StartHua/Comfyui_joytag", + "files": [ + "https://github.com/StartHua/Comfyui_joytag" + ], + "install_type": "git-clone", + "description": "JoyTag is a state-of-the-art AI vision model for tagging images, with a focus on sex positivity and inclusivity. It uses the Danbooru tagging schema, but works across a wide range of images, from hand drawn to photographic.\nDownload the weights and put them under checkpoints: [a/https://huggingface.co/fancyfeast/joytag/tree/main](https://huggingface.co/fancyfeast/joytag/tree/main)" + }, + { + "author": "redhottensors", + "title": "ComfyUI-Prediction", + "reference": "https://github.com/redhottensors/ComfyUI-Prediction", + "files": [ + "https://github.com/redhottensors/ComfyUI-Prediction" + ], + "install_type": "git-clone", + "description": "Fully customizable Classifier Free Guidance for ComfyUI." + }, + { + "author": "nkchocoai", + "title": "ComfyUI-TextOnSegs", + "reference": "https://github.com/nkchocoai/ComfyUI-TextOnSegs", + "files": [ + "https://github.com/nkchocoai/ComfyUI-TextOnSegs" + ], + "install_type": "git-clone", + "description": "Add a node for drawing text with CR Draw Text of ComfyUI_Comfyroll_CustomNodes to the area of SEGS detected by Ultralytics Detector of ComfyUI-Impact-Pack."
+ }, + { + "author": "cubiq", + "title": "ComfyUI InstantID (Native Support)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID", + "files": [ + "https://github.com/cubiq/ComfyUI_InstantID" + ], + "install_type": "git-clone", + "description": "Native [a/InstantID](https://github.com/InstantID/InstantID) support for ComfyUI.\nThis extension differs from the many already available as it doesn't use diffusers but instead implements InstantID natively and it fully integrates with ComfyUI.\nPlease note this still could be considered beta stage, looking forward to your feedback." + }, + { + "author": "Franck-Demongin", + "title": "NX_PromptStyler", + "reference": "https://github.com/Franck-Demongin/NX_PromptStyler", + "files": [ + "https://github.com/Franck-Demongin/NX_PromptStyler" + ], + "install_type": "git-clone", + "description": "A custom node for ComfyUI to create a prompt based on a list of keywords saved in CSV files." + }, + { + "author": "Billius-AI", + "title": "ComfyUI-Path-Helper", + "reference": "https://github.com/Billius-AI/ComfyUI-Path-Helper", + "files": [ + "https://github.com/Billius-AI/ComfyUI-Path-Helper" + ], + "install_type": "git-clone", + "description": "Nodes:Create Project Root, Add Folder, Add Folder Advanced, Add File Name Prefix, Add File Name Prefix Advanced, ShowPath" + }, + { + "author": "mbrostami", + "title": "ComfyUI-HF", + "reference": "https://github.com/mbrostami/ComfyUI-HF", + "files": [ + "https://github.com/mbrostami/ComfyUI-HF" + ], + "install_type": "git-clone", + "description": "ComfyUI Node to work with Hugging Face repositories" + }, + { + "author": "digitaljohn", + "title": "ComfyUI-ProPost", + "reference": "https://github.com/digitaljohn/comfyui-propost", + "files": [ + "https://github.com/digitaljohn/comfyui-propost" + ], + "install_type": "git-clone", + "description": "A set of custom ComfyUI nodes for performing basic post-processing effects including Film Grain and Vignette. These effects can help to take the edge off AI imagery and make them feel more natural." + }, + { + "author": "deforum", + "title": "Deforum Nodes", + "reference": "https://github.com/XmYx/deforum-comfy-nodes", + "files": [ + "https://github.com/XmYx/deforum-comfy-nodes" + ], + "install_type": "git-clone", + "description": "Official Deforum animation pipeline tools that provide a unique way to create frame-by-frame generative motion art." + }, + { + "author": "adbrasi", + "title": "ComfyUI-TrashNodes-DownloadHuggingface", + "reference": "https://github.com/adbrasi/ComfyUI-TrashNodes-DownloadHuggingface", + "files": [ + "https://github.com/adbrasi/ComfyUI-TrashNodes-DownloadHuggingface" + ], + "install_type": "git-clone", + "description": "ComfyUI-TrashNodes-DownloadHuggingface is a ComfyUI node designed to facilitate the download of models you have just trained and uploaded to Hugging Face. This node is particularly useful for users who employ Google Colab for training and need to quickly download their models for deployment." + }, + { + "author": "DonBaronFactory", + "title": "ComfyUI-Cre8it-Nodes", + "reference": "https://github.com/DonBaronFactory/ComfyUI-Cre8it-Nodes", + "files": [ + "https://github.com/DonBaronFactory/ComfyUI-Cre8it-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes:CRE8IT Serial Prompter, CRE8IT Apply Serial Prompter, CRE8IT Image Sizer. 
A few simple nodes to facilitate working with ComfyUI workflows" + }, + { + "author": "dezi-ai", + "title": "ComfyUI Animate LCM", + "reference": "https://github.com/dezi-ai/ComfyUI-AnimateLCM", + "files": [ + "https://github.com/dezi-ai/ComfyUI-AnimateLCM" + ], + "install_type": "git-clone", + "description": "ComfyUI implementation for [a/AnimateLCM](https://animatelcm.github.io/) [[a/paper](https://arxiv.org/abs/2402.00769)]." + }, + { + "author": "kadirnar", + "title": "ComfyUI-Transformers", + "reference": "https://github.com/kadirnar/ComfyUI-Transformers", + "files": [ + "https://github.com/kadirnar/ComfyUI-Transformers" + ], + "install_type": "git-clone", + "description": "ComfyUI-Transformers is a cutting-edge project combining the power of computer vision and natural language processing to create intuitive and user-friendly interfaces. Our goal is to make technology more accessible and engaging." + }, + { + "author": "chaojie", + "title": "ComfyUI-DynamiCrafter", + "reference": "https://github.com/chaojie/ComfyUI-DynamiCrafter", + "files": [ + "https://github.com/chaojie/ComfyUI-DynamiCrafter" + ], + "install_type": "git-clone", + "description": "Better Dynamic, Higher Resolution, and Stronger Coherence!" + }, + { + "author": "bilal-arikan", + "title": "ComfyUI_TextAssets", + "reference": "https://github.com/bilal-arikan/ComfyUI_TextAssets", + "files": [ + "https://github.com/bilal-arikan/ComfyUI_TextAssets" + ], + "install_type": "git-clone", + "description": "With this node you can upload text files to the input folder from your local computer." + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI SegMoE", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SegMoE", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SegMoE" + ], + "install_type": "git-clone", + "description": "Unofficial implementation of [a/SegMoE: Segmind Mixture of Diffusion Experts](https://github.com/segmind/segmoe) for ComfyUI" + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI-SVD-ZHO (WIP)", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SVD-ZHO", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SVD-ZHO" + ], + "install_type": "git-clone", + "description": "My Workflows + Auxiliary nodes for Stable Video Diffusion (SVD)" + }, + { + "author": "MarkoCa1", + "title": "ComfyUI_Segment_Mask", + "reference": "https://github.com/MarkoCa1/ComfyUI_Segment_Mask", + "files": [ + "https://github.com/MarkoCa1/ComfyUI_Segment_Mask" + ], + "install_type": "git-clone", + "description": "Mask cutout based on Segment Anything." + }, + { + "author": "antrobot", + "title": "antrobots-comfyUI-nodepack", + "reference": "https://github.com/antrobot1234/antrobots-comfyUI-nodepack", + "files": [ + "https://github.com/antrobot1234/antrobots-comfyUI-nodepack" + ], + "install_type": "git-clone", + "description": "A small node pack containing various things I felt ought to be in base ComfyUI. Currently includes some image handling nodes to help with inpainting, a version of KSampler (advanced) that allows for denoise, and a node that can swap its inputs. Remember to make an issue if you experience any bugs or errors!"
+ }, + { + "author": "dfl", + "title": "comfyui-clip-with-break", + "reference": "https://github.com/dfl/comfyui-clip-with-break", + "files": [ + "https://github.com/dfl/comfyui-clip-with-break" + ], + "install_type": "git-clone", + "description": "Clip text encoder with BREAK formatting like A1111 (uses conditioning concat)" + }, + { + "author": "yffyhk", + "title": "comfyui_auto_danbooru", + "reference": "https://github.com/yffyhk/comfyui_auto_danbooru", + "files": [ + "https://github.com/yffyhk/comfyui_auto_danbooru" + ], + "install_type": "git-clone", + "description": "Nodes: Get Danbooru, Tag Encode" + }, + { + "author": "Clybius", + "title": "ComfyUI Extra Samplers", + "reference": "https://github.com/Clybius/ComfyUI-Extra-Samplers", + "files": [ + "https://github.com/Clybius/ComfyUI-Extra-Samplers" + ], + "install_type": "git-clone", + "description": "Nodes: SamplerCustomNoise, SamplerCustomNoiseDuo, SamplerCustomModelMixtureDuo, SamplerRES_Momentumized, SamplerDPMPP_DualSDE_Momentumized, SamplerCLYB_4M_SDE_Momentumized, SamplerTTM, SamplerLCMCustom\nThis extension provides various custom samplers not offered by the default nodes in ComfyUI." + }, + { + "author": "ttulttul", + "title": "ComfyUI-Tensor-Operations", + "reference": "https://github.com/ttulttul/ComfyUI-Tensor-Operations", + "files": [ + "https://github.com/ttulttul/ComfyUI-Tensor-Operations" + ], + "install_type": "git-clone", + "description": "This repo contains nodes for ComfyUI that implement some helpful operations on tensors, such as normalization." + }, + { + "author": "davask", + "title": "🐰 MarasIT Nodes", + "reference": "https://github.com/davask/ComfyUI-MarasIT-Nodes", + "files": [ + "https://github.com/davask/ComfyUI-MarasIT-Nodes" + ], + "install_type": "git-clone", + "description": "This is a revised version of the Bus node from the [a/Was Node Suite](https://github.com/WASasquatch/was-node-suite-comfyui) to integrate more input/output." + }, + { + "author": "chaojie", + "title": "ComfyUI-Panda3d", + "reference": "https://github.com/chaojie/ComfyUI-Panda3d", + "files": [ + "https://github.com/chaojie/ComfyUI-Panda3d" + ], + "install_type": "git-clone", + "description": "ComfyUI 3d engine" + }, + { + "author": "shadowcz007", + "title": "Consistency Decoder", + "reference": "https://github.com/shadowcz007/comfyui-consistency-decoder", + "files": [ + "https://github.com/shadowcz007/comfyui-consistency-decoder" + ], + "install_type": "git-clone", + "description": "[a/openai Consistency Decoder](https://github.com/openai/consistencydecoder). After downloading the [a/OpenAI VAE model](https://openaipublic.azureedge.net/diff-vae/c9cebd3132dd9c42936d803e33424145a748843c8f716c0814838bdc8a2fe7cb/decoder.pt), place it in the `model/vae` directory for use." + }, + { + "author": "pkpk", + "title": "ComfyUI-TemporaryLoader", + "reference": "https://github.com/pkpkTech/ComfyUI-TemporaryLoader", + "files": [ + "https://github.com/pkpkTech/ComfyUI-TemporaryLoader" + ], + "install_type": "git-clone", + "description": "This is a custom node of ComfyUI that downloads and loads models from the input URL. The model is temporarily downloaded into memory and not saved to storage.\nThis could be useful when trying out models or when using various models on machines with limited storage. Since the model is downloaded into memory, expect higher memory usage than usual." 
+ }, + { + "author": "TemryL", + "title": "ComfyS3: Amazon S3 Integration for ComfyUI", + "reference": "https://github.com/TemryL/ComfyS3", + "files": [ + "https://github.com/TemryL/ComfyS3" + ], + "install_type": "git-clone", + "description": "ComfyS3 seamlessly integrates with [a/Amazon S3](https://aws.amazon.com/en/s3/) in ComfyUI. This open-source project provides custom nodes for effortless loading and saving of images, videos, and checkpoint models directly from S3 buckets within the ComfyUI graph interface." + }, + { + "author": "trumanwong", + "title": "ComfyUI-NSFW-Detection", + "reference": "https://github.com/trumanwong/ComfyUI-NSFW-Detection", + "files": [ + "https://github.com/trumanwong/ComfyUI-NSFW-Detection" + ], + "install_type": "git-clone", + "description": "An implementation of NSFW Detection for ComfyUI" + }, + { + "author": "AIGODLIKE", + "title": "AIGODLIKE-ComfyUI-Studio", + "reference": "https://github.com/AIGODLIKE/AIGODLIKE-ComfyUI-Studio", + "files": [ + "https://github.com/AIGODLIKE/AIGODLIKE-ComfyUI-Studio" + ], + "install_type": "git-clone", + "description": "Improve the interactive experience of using ComfyUI, such as making the loading of ComfyUI models more intuitive and making it easier to create model thumbnails" + }, + { + "author": "Chan-0312", + "title": "ComfyUI-IPAnimate", + "reference": "https://github.com/Chan-0312/ComfyUI-IPAnimate", + "files": [ + "https://github.com/Chan-0312/ComfyUI-IPAnimate" + ], + "install_type": "git-clone", + "description": "This is a project that generates videos frame by frame based on IPAdapter+ControlNet. Unlike [a/Steerable-motion](https://github.com/banodoco/Steerable-Motion), we do not rely on AnimateDiff. This decision is primarily due to the fact that the videos generated by AnimateDiff are often blurry. Through frame-by-frame control using IPAdapter+ControlNet, we can produce higher definition and more controllable videos." + }, + { + "author": "LyazS", + "title": "Anime Character Segmentation node for comfyui", + "reference": "https://github.com/LyazS/comfyui-anime-seg", + "files": [ + "https://github.com/LyazS/comfyui-anime-seg" + ], + "install_type": "git-clone", + "description": "A Anime Character Segmentation node for comfyui, based on [this hf space](https://huggingface.co/spaces/skytnt/anime-remove-background)." + }, + { + "author": "zhongpei", + "title": "ComfyUI for InstructIR", + "reference": "https://github.com/zhongpei/ComfyUI-InstructIR", + "files": [ + "https://github.com/zhongpei/ComfyUI-InstructIR" + ], + "install_type": "git-clone", + "description": "Enhancing Image Restoration. (ref:[a/InstructIR](https://github.com/mv-lab/InstructIR))" + }, + { + "author": "nosiu", + "title": "ComfyUI InstantID Faceswapper", + "reference": "https://github.com/nosiu/comfyui-instantId-faceswap", + "files": [ + "https://github.com/nosiu/comfyui-instantId-faceswap" + ], + "install_type": "git-clone", + "description": "Implementation of [a/faceswap](https://github.com/nosiu/InstantID-faceswap/tree/main) based on [a/InstantID](https://github.com/InstantID/InstantID) for ComfyUI. Allows usage of [a/LCM Lora](https://huggingface.co/latent-consistency/lcm-lora-sdxl) which can produce good results in only a few generation steps.\nNOTE:Works ONLY with SDXL checkpoints." 
+ }, + { + "author": "ricklove", + "title": "comfyui-ricklove", + "reference": "https://github.com/ricklove/comfyui-ricklove", + "files": [ + "https://github.com/ricklove/comfyui-ricklove" + ], + "install_type": "git-clone", + "description": "Nodes: Image Crop and Resize by Mask, Image Uncrop, Image Shadow, Optical Flow (Dip), Warp Image with Flow, Image Threshold (Channels), Finetune Variable, Finetune Analyze, Finetune Analyze Batch, ... Misc ComfyUI nodes by Rick Love" + }, + { + "author": "chaojie", + "title": "ComfyUI-Pymunk", + "reference": "https://github.com/chaojie/ComfyUI-Pymunk", + "files": [ + "https://github.com/chaojie/ComfyUI-Pymunk" + ], + "install_type": "git-clone", + "description": "Pymunk is a easy-to-use pythonic 2d physics library that can be used whenever you need 2d rigid body physics from Python" + }, + { + "author": "ZHO-ZHO-ZHO", + "title": "ComfyUI-Qwen-VL-API", + "reference": "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Qwen-VL-API", + "files": [ + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Qwen-VL-API" + ], + "install_type": "git-clone", + "description": "QWen-VL-Plus & QWen-VL-Max in ComfyUI" + }, + { + "author": "shadowcz007", + "title": "comfyui-ultralytics-yolo", + "reference": "https://github.com/shadowcz007/comfyui-ultralytics-yolo", + "files": [ + "https://github.com/shadowcz007/comfyui-ultralytics-yolo" + ], + "install_type": "git-clone", + "description": "Nodes:Detect By Label." + }, + { + "author": "StartHua", + "title": "ComfyUI_Seg_VITON", + "reference": "https://github.com/StartHua/ComfyUI_Seg_VITON", + "files": [ + "https://github.com/StartHua/ComfyUI_Seg_VITON" + ], + "install_type": "git-clone", + "description": "Nodes:segformer_clothes, segformer_agnostic, segformer_remove_bg, stabel_vition. Nodes for model dress up." + }, + { + "author": "HaydenReeve", + "title": "ComfyUI Better Strings", + "reference": "https://github.com/HaydenReeve/ComfyUI-Better-Strings", + "files": [ + "https://github.com/HaydenReeve/ComfyUI-Better-Strings" + ], + "install_type": "git-clone", + "description": "Strings should be easy, and simple. This extension aims to provide a set of nodes that make working with strings in ComfyUI a little bit easier." + }, + { + "author": "Loewen-Hob", + "title": "Rembg Background Removal Node for ComfyUI", + "reference": "https://github.com/Loewen-Hob/rembg-comfyui-node-better", + "files": [ + "https://github.com/Loewen-Hob/rembg-comfyui-node-better" + ], + "install_type": "git-clone", + "description": "This custom node is based on the [a/rembg-comfyui-node](https://github.com/Jcd1230/rembg-comfyui-node) but provides additional functionality to select ONNX models." + }, + { + "author": "mape", + "title": "mape's ComfyUI Helpers", + "reference": "https://github.com/mape/ComfyUI-mape-Helpers", + "files": [ + "https://github.com/mape/ComfyUI-mape-Helpers" + ], + "install_type": "git-clone", + "description": "A project that combines all my qualify of life improvements for ComyUI. For more info visit: [a/https://comfyui.ma.pe/](https://comfyui.ma.pe/)" + }, + { + "author": "zhongpei", + "title": "Comfyui_image2prompt", + "reference": "https://github.com/zhongpei/Comfyui_image2prompt", + "files": [ + "https://github.com/zhongpei/Comfyui_image2prompt" + ], + "install_type": "git-clone", + "description": "Nodes:Image to Text, Loader Image to Text Model." 
+ }, + { + "author": "jamal-alkharrat", + "title": "ComfyUI_rotate_image", + "reference": "https://github.com/jamal-alkharrat/ComfyUI_rotate_image", + "files": [ + "https://github.com/jamal-alkharrat/ComfyUI_rotate_image" + ], + "install_type": "git-clone", + "description": "ComfyUI Custom Node to Rotate Images, Img2Img node." + }, + { + "author": "JerryOrbachJr", + "title": "ComfyUI-RandomSize", + "reference": "https://github.com/JerryOrbachJr/ComfyUI-RandomSize", + "files": [ + "https://github.com/JerryOrbachJr/ComfyUI-RandomSize" + ], + "install_type": "git-clone", + "description": "A ComfyUI custom node that randomly selects a height and width pair from a list in a config file" + }, + { + "author": "blepping", + "title": "ComfyUI-bleh", + "reference": "https://github.com/blepping/ComfyUI-bleh", + "files": [ + "https://github.com/blepping/ComfyUI-bleh" + ], + "install_type": "git-clone", + "description": "Better TAESD previews, BlehHyperTile." + }, + { + "author": "yuvraj108c", + "title": "ComfyUI Whisper", + "reference": "https://github.com/yuvraj108c/ComfyUI-Whisper", + "files": [ + "https://github.com/yuvraj108c/ComfyUI-Whisper" + ], + "install_type": "git-clone", + "description": "Transcribe audio and add subtitles to videos using Whisper in ComfyUI" + }, + { + "author": "kijai", + "title": "ComfyUI-CCSR", + "reference": "https://github.com/kijai/ComfyUI-CCSR", + "files": [ + "https://github.com/kijai/ComfyUI-CCSR" + ], + "install_type": "git-clone", + "description": "ComfyUI- CCSR upscaler node" + }, + { + "author": "azure-dragon-ai", + "title": "ComfyUI-ClipScore-Nodes", + "reference": "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes", + "files": [ + "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes:ImageScore, Loader, Image Processor, Real Image Processor, Fake Image Processor, Text Processor. ComfyUI Nodes for ClipScore" + }, + { + "author": "Hiero207", + "title": "ComfyUI-Hiero-Nodes", + "reference": "https://github.com/Hiero207/ComfyUI-Hiero-Nodes", + "files": [ + "https://github.com/Hiero207/ComfyUI-Hiero-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes:Post to Discord w/ Webhook" + }, + { + "author": "azure-dragon-ai", + "title": "ComfyUI-ClipScore-Nodes", + "reference": "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes", + "files": [ + "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes" + ], + "install_type": "git-clone", + "description": "ComfyUI Nodes for ClipScore" + }, + { + "author": "godspede", + "title": "ComfyUI Substring", + "reference": "https://github.com/godspede/ComfyUI_Substring", + "files": [ + "https://github.com/godspede/ComfyUI_Substring" + ], + "install_type": "git-clone", + "description": "Just a simple substring node that takes text and length as input, and outputs the first length characters." + }, + { + "author": "gokayfem", + "title": "VLM_nodes", + "reference": "https://github.com/gokayfem/ComfyUI_VLM_nodes", + "files": [ + "https://github.com/gokayfem/ComfyUI_VLM_nodes" + ], + "install_type": "git-clone", + "description": "Nodes:VisionQuestionAnswering Node, PromptGenerate Node" + }, + { + "author": "godspede", + "title": "ComfyUI Substring", + "reference": "https://github.com/godspede/ComfyUI_Substring", + "files": [ + "https://github.com/godspede/ComfyUI_Substring" + ], + "install_type": "git-clone", + "description": "Just a simple substring node that takes text and length as input, and outputs the first length characters." 
+ } + ] +} diff --git a/custom_nodes/ComfyUI-Manager/node_db/new/extension-node-map.json b/custom_nodes/ComfyUI-Manager/node_db/new/extension-node-map.json new file mode 100644 index 0000000000000000000000000000000000000000..06eb608e9a068fb14f976af858a10ae0dc43bec2 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/new/extension-node-map.json @@ -0,0 +1,8424 @@ +{ + "https://gist.github.com/alkemann/7361b8eb966f29c8238fd323409efb68/raw/f9605be0b38d38d3e3a2988f89248ff557010076/alkemann.py": [ + [ + "Int to Text", + "Save A1 Image", + "Seed With Text" + ], + { + "title_aux": "alkemann nodes" + } + ], + "https://git.mmaker.moe/mmaker/sd-webui-color-enhance": [ + [ + "MMakerColorBlend", + "MMakerColorEnhance" + ], + { + "title_aux": "Color Enhance" + } + ], + "https://github.com/0xbitches/ComfyUI-LCM": [ + [ + "LCM_Sampler", + "LCM_Sampler_Advanced", + "LCM_img2img_Sampler", + "LCM_img2img_Sampler_Advanced" + ], + { + "title_aux": "Latent Consistency Model for ComfyUI" + } + ], + "https://github.com/1shadow1/hayo_comfyui_nodes/raw/main/LZCNodes.py": [ + [ + "LoadPILImages", + "MergeImages", + "make_transparentmask", + "tensor_trans_pil", + "words_generatee" + ], + { + "title_aux": "Hayo comfyui nodes" + } + ], + "https://github.com/42lux/ComfyUI-safety-checker": [ + [ + "Safety Checker" + ], + { + "title_aux": "ComfyUI-safety-checker" + } + ], + "https://github.com/54rt1n/ComfyUI-DareMerge": [ + [ + "DM_AdvancedDareModelMerger", + "DM_AdvancedModelMerger", + "DM_AttentionGradient", + "DM_BlockGradient", + "DM_BlockModelMerger", + "DM_DareClipMerger", + "DM_DareModelMergerBlock", + "DM_DareModelMergerElement", + "DM_DareModelMergerMBW", + "DM_GradientEdit", + "DM_GradientOperations", + "DM_GradientReporting", + "DM_InjectNoise", + "DM_LoRALoaderTags", + "DM_LoRAReporting", + "DM_MBWGradient", + "DM_MagnitudeMasker", + "DM_MaskEdit", + "DM_MaskOperations", + "DM_MaskReporting", + "DM_ModelReporting", + "DM_NormalizeModel", + "DM_QuadMasker", + "DM_ShellGradient", + "DM_SimpleMasker" + ], + { + "title_aux": "ComfyUI-DareMerge" + } + ], + "https://github.com/80sVectorz/ComfyUI-Static-Primitives": [ + [ + "FloatStaticPrimitive", + "IntStaticPrimitive", + "StringMlStaticPrimitive", + "StringStaticPrimitive" + ], + { + "title_aux": "ComfyUI-Static-Primitives" + } + ], + "https://github.com/AInseven/ComfyUI-fastblend": [ + [ + "FillDarkMask", + "InterpolateKeyFrame", + "MaskListcaptoBatch", + "MyOpenPoseNode", + "SmoothVideo", + "reBatchImage" + ], + { + "title_aux": "ComfyUI-fastblend" + } + ], + "https://github.com/AIrjen/OneButtonPrompt": [ + [ + "AutoNegativePrompt", + "CreatePromptVariant", + "OneButtonPreset", + "OneButtonPrompt", + "SavePromptToFile" + ], + { + "title_aux": "One Button Prompt" + } + ], + "https://github.com/AbdullahAlfaraj/Comfy-Photoshop-SD": [ + [ + "APS_LatentBatch", + "APS_Seed", + "ContentMaskLatent", + "ControlNetScript", + "ControlnetUnit", + "GaussianLatentImage", + "GetConfig", + "LoadImageBase64", + "LoadImageWithMetaData", + "LoadLorasFromPrompt", + "MaskExpansion" + ], + { + "title_aux": "Comfy-Photoshop-SD" + } + ], + "https://github.com/AbyssYuan0/ComfyUI_BadgerTools": [ + [ + "ApplyMaskToImage-badger", + "CropImageByMask-badger", + "ExpandImageWithColor-badger", + "FindThickLinesFromCanny-badger", + "FloatToInt-badger", + "FloatToString-badger", + "FrameToVideo-badger", + "GarbageCollect-badger", + "GetColorFromBorder-badger", + "GetDirName-badger", + "GetUUID-badger", + "IdentifyBorderColorToMask-badger", + "IdentifyColorToMask-badger", + 
"ImageNormalization-badger", + "ImageOverlap-badger", + "ImageScaleToSide-badger", + "IntToString-badger", + "SegmentToMaskByPoint-badger", + "StringToFizz-badger", + "TextListToString-badger", + "TrimTransparentEdges-badger", + "VideoCutFromDir-badger", + "VideoToFrame-badger", + "deleteDir-badger", + "findCenterOfMask-badger", + "getImageSide-badger", + "getParentDir-badger", + "mkdir-badger" + ], + { + "title_aux": "ComfyUI_BadgerTools" + } + ], + "https://github.com/Acly/comfyui-inpaint-nodes": [ + [ + "INPAINT_ApplyFooocusInpaint", + "INPAINT_InpaintWithModel", + "INPAINT_LoadFooocusInpaint", + "INPAINT_LoadInpaintModel", + "INPAINT_MaskedBlur", + "INPAINT_MaskedFill", + "INPAINT_VAEEncodeInpaintConditioning" + ], + { + "title_aux": "ComfyUI Inpaint Nodes" + } + ], + "https://github.com/Acly/comfyui-tooling-nodes": [ + [ + "ETN_ApplyMaskToImage", + "ETN_CropImage", + "ETN_LoadImageBase64", + "ETN_LoadMaskBase64", + "ETN_SendImageWebSocket" + ], + { + "title_aux": "ComfyUI Nodes for External Tooling" + } + ], + "https://github.com/Amorano/Jovimetrix": [ + [], + { + "author": "amorano", + "description": "Webcams, GLSL shader, Media Streaming, Tick animation, Image manipulation,", + "nodename_pattern": " \\(jov\\)$", + "title": "Jovimetrix", + "title_aux": "Jovimetrix Composition Nodes" + } + ], + "https://github.com/ArtBot2023/CharacterFaceSwap": [ + [ + "Color Blend", + "Crop Face", + "Exclude Facial Feature", + "Generation Parameter Input", + "Generation Parameter Output", + "Image Full BBox", + "Load BiseNet", + "Load RetinaFace", + "Mask Contour", + "Segment Face", + "Uncrop Face" + ], + { + "title_aux": "Character Face Swap" + } + ], + "https://github.com/ArtVentureX/comfyui-animatediff": [ + [ + "AnimateDiffCombine", + "AnimateDiffLoraLoader", + "AnimateDiffModuleLoader", + "AnimateDiffSampler", + "AnimateDiffSlidingWindowOptions", + "ImageSizeAndBatchSize", + "LoadVideo" + ], + { + "title_aux": "AnimateDiff" + } + ], + "https://github.com/AustinMroz/ComfyUI-SpliceTools": [ + [ + "LogSigmas", + "RerangeSigmas", + "SpliceDenoised", + "SpliceLatents", + "TemporalSplice" + ], + { + "title_aux": "SpliceTools" + } + ], + "https://github.com/BadCafeCode/masquerade-nodes-comfyui": [ + [ + "Blur", + "Change Channel Count", + "Combine Masks", + "Constant Mask", + "Convert Color Space", + "Create QR Code", + "Create Rect Mask", + "Cut By Mask", + "Get Image Size", + "Image To Mask", + "Make Image Batch", + "Mask By Text", + "Mask Morphology", + "Mask To Region", + "MasqueradeIncrementer", + "Mix Color By Mask", + "Mix Images By Mask", + "Paste By Mask", + "Prune By Mask", + "Separate Mask Components", + "Unary Image Op", + "Unary Mask Op" + ], + { + "title_aux": "Masquerade Nodes" + } + ], + "https://github.com/Beinsezii/bsz-cui-extras": [ + [ + "BSZAbsoluteHires", + "BSZAspectHires", + "BSZColoredLatentImageXL", + "BSZCombinedHires", + "BSZHueChromaXL", + "BSZInjectionKSampler", + "BSZLatentDebug", + "BSZLatentFill", + "BSZLatentGradient", + "BSZLatentHSVAImage", + "BSZLatentOffsetXL", + "BSZLatentRGBAImage", + "BSZLatentbuster", + "BSZPixelbuster", + "BSZPixelbusterHelp", + "BSZPrincipledConditioning", + "BSZPrincipledSampler", + "BSZPrincipledScale", + "BSZStrangeResample" + ], + { + "title_aux": "bsz-cui-extras" + } + ], + "https://github.com/BennyKok/comfyui-deploy": [ + [ + "ComfyUIDeployExternalCheckpoint", + "ComfyUIDeployExternalImage", + "ComfyUIDeployExternalImageAlpha", + "ComfyUIDeployExternalLora", + "ComfyUIDeployExternalNumber", + "ComfyUIDeployExternalNumberInt", + 
"ComfyUIDeployExternalText" + ], + { + "author": "BennyKok", + "description": "", + "nickname": "Comfy Deploy", + "title": "comfyui-deploy", + "title_aux": "ComfyUI Deploy" + } + ], + "https://github.com/Bikecicle/ComfyUI-Waveform-Extensions/raw/main/EXT_AudioManipulation.py": [ + [ + "BatchJoinAudio", + "CutAudio", + "DuplicateAudio", + "JoinAudio", + "ResampleAudio", + "ReverseAudio", + "StretchAudio" + ], + { + "title_aux": "Waveform Extensions" + } + ], + "https://github.com/Billius-AI/ComfyUI-Path-Helper": [ + [ + "Add File Name Prefix", + "Add File Name Prefix Advanced", + "Add Folder", + "Add Folder Advanced", + "Create Project Root", + "Join Variables", + "Show Path", + "Show String" + ], + { + "title_aux": "ComfyUI-Path-Helper" + } + ], + "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb": [ + [ + "BNK_AddCLIPSDXLParams", + "BNK_AddCLIPSDXLRParams", + "BNK_CLIPTextEncodeAdvanced", + "BNK_CLIPTextEncodeSDXLAdvanced" + ], + { + "title_aux": "Advanced CLIP Text Encode" + } + ], + "https://github.com/BlenderNeko/ComfyUI_Cutoff": [ + [ + "BNK_CutoffBasePrompt", + "BNK_CutoffRegionsToConditioning", + "BNK_CutoffRegionsToConditioning_ADV", + "BNK_CutoffSetRegions" + ], + { + "title_aux": "ComfyUI Cutoff" + } + ], + "https://github.com/BlenderNeko/ComfyUI_Noise": [ + [ + "BNK_DuplicateBatchIndex", + "BNK_GetSigma", + "BNK_InjectNoise", + "BNK_NoisyLatentImage", + "BNK_SlerpLatent", + "BNK_Unsampler" + ], + { + "title_aux": "ComfyUI Noise" + } + ], + "https://github.com/BlenderNeko/ComfyUI_SeeCoder": [ + [ + "ConcatConditioning", + "SEECoderImageEncode" + ], + { + "title_aux": "SeeCoder [WIP]" + } + ], + "https://github.com/BlenderNeko/ComfyUI_TiledKSampler": [ + [ + "BNK_TiledKSampler", + "BNK_TiledKSamplerAdvanced" + ], + { + "title_aux": "Tiled sampling for ComfyUI" + } + ], + "https://github.com/CYBERLOOM-INC/ComfyUI-nodes-hnmr": [ + [ + "CLIPIter", + "Dict2Model", + "GridImage", + "ImageBlend2", + "KSamplerOverrided", + "KSamplerSetting", + "KSamplerXYZ", + "LatentToHist", + "LatentToImage", + "ModelIter", + "RandomLatentImage", + "SaveStateDict", + "SaveText", + "StateDictLoader", + "StateDictMerger", + "StateDictMergerBlockWeighted", + "StateDictMergerBlockWeightedMulti", + "VAEDecodeBatched", + "VAEEncodeBatched", + "VAEIter" + ], + { + "title_aux": "ComfyUI-nodes-hnmr" + } + ], + "https://github.com/CaptainGrock/ComfyUIInvisibleWatermark/raw/main/Invisible%20Watermark.py": [ + [ + "Apply Invisible Watermark", + "Extract Watermark" + ], + { + "title_aux": "ComfyUIInvisibleWatermark" + } + ], + "https://github.com/Chan-0312/ComfyUI-IPAnimate": [ + [ + "IPAdapterAnimate" + ], + { + "title_aux": "ComfyUI-IPAnimate" + } + ], + "https://github.com/Chaoses-Ib/ComfyUI_Ib_CustomNodes": [ + [ + "ImageToPIL", + "LoadImageFromPath", + "PILToImage", + "PILToMask" + ], + { + "title_aux": "ComfyUI_Ib_CustomNodes" + } + ], + "https://github.com/Clybius/ComfyUI-Extra-Samplers": [ + [ + "SamplerCLYB_4M_SDE_Momentumized", + "SamplerCustomModelMixtureDuo", + "SamplerCustomNoise", + "SamplerCustomNoiseDuo", + "SamplerDPMPP_DualSDE_Momentumized", + "SamplerEulerAncestralDancing_Experimental", + "SamplerLCMCustom", + "SamplerRES_Momentumized", + "SamplerTTM" + ], + { + "title_aux": "ComfyUI Extra Samplers" + } + ], + "https://github.com/Clybius/ComfyUI-Latent-Modifiers": [ + [ + "Latent Diffusion Mega Modifier" + ], + { + "title_aux": "ComfyUI-Latent-Modifiers" + } + ], + "https://github.com/CosmicLaca/ComfyUI_Primere_Nodes": [ + [ + "PrimereAnyDetailer", + "PrimereAnyOutput", + "PrimereCKPT", + 
"PrimereCKPTLoader", + "PrimereCLIPEncoder", + "PrimereClearPrompt", + "PrimereDynamicParser", + "PrimereEmbedding", + "PrimereEmbeddingHandler", + "PrimereEmbeddingKeywordMerger", + "PrimereEmotionsStyles", + "PrimereHypernetwork", + "PrimereImageSegments", + "PrimereKSampler", + "PrimereLCMSelector", + "PrimereLORA", + "PrimereLYCORIS", + "PrimereLatentNoise", + "PrimereLoraKeywordMerger", + "PrimereLoraStackMerger", + "PrimereLycorisKeywordMerger", + "PrimereLycorisStackMerger", + "PrimereMetaCollector", + "PrimereMetaRead", + "PrimereMetaSave", + "PrimereMidjourneyStyles", + "PrimereModelConceptSelector", + "PrimereModelKeyword", + "PrimereNetworkTagLoader", + "PrimerePrompt", + "PrimerePromptSwitch", + "PrimereRefinerPrompt", + "PrimereResolution", + "PrimereResolutionMultiplier", + "PrimereResolutionMultiplierMPX", + "PrimereSamplers", + "PrimereSamplersSteps", + "PrimereSeed", + "PrimereStepsCfg", + "PrimereStyleLoader", + "PrimereStylePile", + "PrimereTextOutput", + "PrimereVAE", + "PrimereVAELoader", + "PrimereVAESelector", + "PrimereVisualCKPT", + "PrimereVisualEmbedding", + "PrimereVisualHypernetwork", + "PrimereVisualLORA", + "PrimereVisualLYCORIS", + "PrimereVisualStyle" + ], + { + "title_aux": "Primere nodes for ComfyUI" + } + ], + "https://github.com/Danand/ComfyUI-ComfyCouple": [ + [ + "Attention couple", + "Comfy Couple" + ], + { + "author": "Rei D.", + "description": "If you want to draw two different characters together without blending their features, so you could try to check out this custom node.", + "nickname": "Danand", + "title": "Comfy Couple", + "title_aux": "ComfyUI-ComfyCouple" + } + ], + "https://github.com/Davemane42/ComfyUI_Dave_CustomNode": [ + [ + "ABGRemover", + "ConditioningStretch", + "ConditioningUpscale", + "MultiAreaConditioning", + "MultiLatentComposite" + ], + { + "title_aux": "Visual Area Conditioning / Latent composition" + } + ], + "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes": [ + [ + "ABSNode_DF", + "Absolute value", + "Ceil", + "CeilNode_DF", + "Conditioning area scale by ratio", + "ConditioningSetArea with tuples", + "ConditioningSetAreaEXT_DF", + "ConditioningSetArea_DF", + "CosNode_DF", + "Cosines", + "Divide", + "DivideNode_DF", + "EmptyLatentImage_DF", + "Float", + "Float debug print", + "Float2Tuple_DF", + "FloatDebugPrint_DF", + "FloatNode_DF", + "Floor", + "FloorNode_DF", + "Get image size", + "Get latent size", + "GetImageSize_DF", + "GetLatentSize_DF", + "Image scale by ratio", + "Image scale to side", + "ImageScale_Ratio_DF", + "ImageScale_Side_DF", + "Int debug print", + "Int to float", + "Int to tuple", + "Int2Float_DF", + "IntDebugPrint_DF", + "Integer", + "IntegerNode_DF", + "Latent Scale by ratio", + "Latent Scale to side", + "LatentComposite with tuples", + "LatentScale_Ratio_DF", + "LatentScale_Side_DF", + "MultilineStringNode_DF", + "Multiply", + "MultiplyNode_DF", + "PowNode_DF", + "Power", + "Random", + "RandomFloat_DF", + "SinNode_DF", + "Sinus", + "SqrtNode_DF", + "Square root", + "String debug print", + "StringNode_DF", + "Subtract", + "SubtractNode_DF", + "Sum", + "SumNode_DF", + "TanNode_DF", + "Tangent", + "Text", + "Text box", + "Tuple", + "Tuple debug print", + "Tuple multiply", + "Tuple swap", + "Tuple to floats", + "Tuple to ints", + "Tuple2Float_DF", + "TupleDebugPrint_DF", + "TupleNode_DF" + ], + { + "title_aux": "Derfuu_ComfyUI_ModdedNodes" + } + ], + "https://github.com/DonBaronFactory/ComfyUI-Cre8it-Nodes": [ + [ + "ApplySerialPrompter", + "ImageSizer", + "SerialPrompter" + ], + { + "author": 
"CRE8IT GmbH", + "description": "This extension offers various nodes.", + "nickname": "cre8Nodes", + "title": "cr8SerialPrompter", + "title_aux": "ComfyUI-Cre8it-Nodes" + } + ], + "https://github.com/Electrofried/ComfyUI-OpenAINode": [ + [ + "OpenAINode" + ], + { + "title_aux": "OpenAINode" + } + ], + "https://github.com/EllangoK/ComfyUI-post-processing-nodes": [ + [ + "ArithmeticBlend", + "AsciiArt", + "Blend", + "Blur", + "CannyEdgeMask", + "ChromaticAberration", + "ColorCorrect", + "ColorTint", + "Dissolve", + "Dither", + "DodgeAndBurn", + "FilmGrain", + "Glow", + "HSVThresholdMask", + "KMeansQuantize", + "KuwaharaBlur", + "Parabolize", + "PencilSketch", + "PixelSort", + "Pixelize", + "Quantize", + "Sharpen", + "SineWave", + "Solarize", + "Vignette" + ], + { + "title_aux": "ComfyUI-post-processing-nodes" + } + ], + "https://github.com/Extraltodeus/ComfyUI-AutomaticCFG": [ + [ + "Automatic CFG", + "Automatic CFG channels multipliers" + ], + { + "title_aux": "ComfyUI-AutomaticCFG" + } + ], + "https://github.com/Extraltodeus/LoadLoraWithTags": [ + [ + "LoraLoaderTagsQuery" + ], + { + "title_aux": "LoadLoraWithTags" + } + ], + "https://github.com/Extraltodeus/noise_latent_perlinpinpin": [ + [ + "NoisyLatentPerlin" + ], + { + "title_aux": "noise latent perlinpinpin" + } + ], + "https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler": [ + [ + "Get sigmas as float", + "Graph sigmas", + "Manual scheduler", + "Merge sigmas by average", + "Merge sigmas gradually", + "Multiply sigmas", + "Split and concatenate sigmas", + "The Golden Scheduler" + ], + { + "title_aux": "sigmas_tools_and_the_golden_scheduler" + } + ], + "https://github.com/Fannovel16/ComfyUI-Frame-Interpolation": [ + [ + "AMT VFI", + "CAIN VFI", + "EISAI VFI", + "FILM VFI", + "FLAVR VFI", + "GMFSS Fortuna VFI", + "IFRNet VFI", + "IFUnet VFI", + "KSampler Gradually Adding More Denoise (efficient)", + "M2M VFI", + "Make Interpolation State List", + "RIFE VFI", + "STMFNet VFI", + "Sepconv VFI" + ], + { + "title_aux": "ComfyUI Frame Interpolation" + } + ], + "https://github.com/Fannovel16/ComfyUI-Loopchain": [ + [ + "EmptyLatentImageLoop", + "FolderToImageStorage", + "ImageStorageExportLoop", + "ImageStorageImport", + "ImageStorageReset", + "LatentStorageExportLoop", + "LatentStorageImport", + "LatentStorageReset" + ], + { + "title_aux": "ComfyUI Loopchain" + } + ], + "https://github.com/Fannovel16/ComfyUI-MotionDiff": [ + [ + "EmptyMotionData", + "ExportSMPLTo3DSoftware", + "MotionCLIPTextEncode", + "MotionDataVisualizer", + "MotionDiffLoader", + "MotionDiffSimpleSampler", + "RenderSMPLMesh", + "SMPLLoader", + "SaveSMPL", + "SmplifyMotionData" + ], + { + "title_aux": "ComfyUI MotionDiff" + } + ], + "https://github.com/Fannovel16/ComfyUI-Video-Matting": [ + [ + "BRIAAI Matting", + "Robust Video Matting" + ], + { + "title_aux": "ComfyUI-Video-Matting" + } + ], + "https://github.com/Fannovel16/comfyui_controlnet_aux": [ + [ + "AIO_Preprocessor", + "AnimalPosePreprocessor", + "AnimeFace_SemSegPreprocessor", + "AnimeLineArtPreprocessor", + "BAE-NormalMapPreprocessor", + "BinaryPreprocessor", + "CannyEdgePreprocessor", + "ColorPreprocessor", + "DWPreprocessor", + "DensePosePreprocessor", + "DepthAnythingPreprocessor", + "DiffusionEdge_Preprocessor", + "FacialPartColoringFromPoseKps", + "FakeScribblePreprocessor", + "HEDPreprocessor", + "HintImageEnchance", + "ImageGenResolutionFromImage", + "ImageGenResolutionFromLatent", + "ImageIntensityDetector", + "ImageLuminanceDetector", + "InpaintPreprocessor", + 
"LeReS-DepthMapPreprocessor", + "LineArtPreprocessor", + "LineartStandardPreprocessor", + "M-LSDPreprocessor", + "Manga2Anime_LineArt_Preprocessor", + "MaskOptFlow", + "MediaPipe-FaceMeshPreprocessor", + "MeshGraphormer-DepthMapPreprocessor", + "MiDaS-DepthMapPreprocessor", + "MiDaS-NormalMapPreprocessor", + "OneFormer-ADE20K-SemSegPreprocessor", + "OneFormer-COCO-SemSegPreprocessor", + "OpenposePreprocessor", + "PiDiNetPreprocessor", + "PixelPerfectResolution", + "SAMPreprocessor", + "SavePoseKpsAsJsonFile", + "ScribblePreprocessor", + "Scribble_XDoG_Preprocessor", + "SemSegPreprocessor", + "ShufflePreprocessor", + "TEEDPreprocessor", + "TilePreprocessor", + "UniFormer-SemSegPreprocessor", + "Unimatch_OptFlowPreprocessor", + "Zoe-DepthMapPreprocessor", + "Zoe_DepthAnythingPreprocessor" + ], + { + "author": "tstandley", + "title_aux": "ComfyUI's ControlNet Auxiliary Preprocessors" + } + ], + "https://github.com/Feidorian/feidorian-ComfyNodes": [ + [], + { + "nodename_pattern": "^Feidorian_", + "title_aux": "feidorian-ComfyNodes" + } + ], + "https://github.com/Fictiverse/ComfyUI_Fictiverse": [ + [ + "Add Noise to Image with Mask", + "Color correction", + "Displace Image with Depth", + "Displace Images with Mask", + "Zoom Image with Depth" + ], + { + "title_aux": "ComfyUI Fictiverse Nodes" + } + ], + "https://github.com/FizzleDorf/ComfyUI-AIT": [ + [ + "AIT_Unet_Loader", + "AIT_VAE_Encode_Loader" + ], + { + "title_aux": "ComfyUI-AIT" + } + ], + "https://github.com/FizzleDorf/ComfyUI_FizzNodes": [ + [ + "AbsCosWave", + "AbsSinWave", + "BatchGLIGENSchedule", + "BatchPromptSchedule", + "BatchPromptScheduleEncodeSDXL", + "BatchPromptScheduleLatentInput", + "BatchPromptScheduleNodeFlowEnd", + "BatchPromptScheduleSDXLLatentInput", + "BatchStringSchedule", + "BatchValueSchedule", + "BatchValueScheduleLatentInput", + "CalculateFrameOffset", + "ConcatStringSingle", + "CosWave", + "FizzFrame", + "FizzFrameConcatenate", + "ImageBatchFromValueSchedule", + "Init FizzFrame", + "InvCosWave", + "InvSinWave", + "Lerp", + "PromptSchedule", + "PromptScheduleEncodeSDXL", + "PromptScheduleNodeFlow", + "PromptScheduleNodeFlowEnd", + "SawtoothWave", + "SinWave", + "SquareWave", + "StringConcatenate", + "StringSchedule", + "TriangleWave", + "ValueSchedule", + "convertKeyframeKeysToBatchKeys" + ], + { + "title_aux": "FizzNodes" + } + ], + "https://github.com/FlyingFireCo/tiled_ksampler": [ + [ + "Asymmetric Tiled KSampler", + "Circular VAEDecode", + "Tiled KSampler" + ], + { + "title_aux": "tiled_ksampler" + } + ], + "https://github.com/Franck-Demongin/NX_PromptStyler": [ + [ + "NX_PromptStyler" + ], + { + "title_aux": "NX_PromptStyler" + } + ], + "https://github.com/GMapeSplat/ComfyUI_ezXY": [ + [ + "ConcatenateString", + "ItemFromDropdown", + "IterationDriver", + "JoinImages", + "LineToConsole", + "NumberFromList", + "NumbersToList", + "PlotImages", + "StringFromList", + "StringToLabel", + "StringsToList", + "ezMath", + "ezXY_AssemblePlot", + "ezXY_Driver" + ], + { + "title_aux": "ezXY scripts and nodes" + } + ], + "https://github.com/GTSuya-Studio/ComfyUI-Gtsuya-Nodes": [ + [ + "Danbooru (ID)", + "Danbooru (Random)", + "Random File From Path", + "Replace Strings", + "Simple Wildcards", + "Simple Wildcards (Dir.)", + "Wildcards Nodes" + ], + { + "title_aux": "ComfyUI-GTSuya-Nodes" + } + ], + "https://github.com/GavChap/ComfyUI-CascadeResolutions": [ + [ + "CascadeResolutions" + ], + { + "title_aux": "ComfyUI-CascadeResolutions" + } + ], + "https://github.com/Gourieff/comfyui-reactor-node": [ + [ + 
"ReActorFaceSwap", + "ReActorLoadFaceModel", + "ReActorRestoreFace", + "ReActorSaveFaceModel" + ], + { + "title_aux": "ReActor Node for ComfyUI" + } + ], + "https://github.com/HAL41/ComfyUI-aichemy-nodes": [ + [ + "aichemyYOLOv8Segmentation" + ], + { + "title_aux": "ComfyUI aichemy nodes" + } + ], + "https://github.com/Hangover3832/ComfyUI-Hangover-Moondream": [ + [ + "Moondream Interrogator (NO COMMERCIAL USE)" + ], + { + "title_aux": "ComfyUI-Hangover-Moondream" + } + ], + "https://github.com/Hangover3832/ComfyUI-Hangover-Nodes": [ + [ + "Image Scale Bounding Box", + "MS kosmos-2 Interrogator", + "Make Inpaint Model", + "Save Image w/o Metadata" + ], + { + "title_aux": "ComfyUI-Hangover-Nodes" + } + ], + "https://github.com/Haoming02/comfyui-diffusion-cg": [ + [ + "Normalization", + "NormalizationXL", + "Recenter", + "Recenter XL" + ], + { + "title_aux": "ComfyUI Diffusion Color Grading" + } + ], + "https://github.com/Haoming02/comfyui-floodgate": [ + [ + "FloodGate" + ], + { + "title_aux": "ComfyUI Floodgate" + } + ], + "https://github.com/HaydenReeve/ComfyUI-Better-Strings": [ + [ + "BetterString" + ], + { + "title_aux": "ComfyUI Better Strings" + } + ], + "https://github.com/HebelHuber/comfyui-enhanced-save-node": [ + [ + "EnhancedSaveNode" + ], + { + "title_aux": "comfyui-enhanced-save-node" + } + ], + "https://github.com/Hiero207/ComfyUI-Hiero-Nodes": [ + [ + "Post to Discord w/ Webhook" + ], + { + "author": "Hiero", + "description": "Just some nodes that I wanted/needed, so I made them.", + "nickname": "HNodes", + "title": "Hiero-Nodes", + "title_aux": "ComfyUI-Hiero-Nodes" + } + ], + "https://github.com/IDGallagher/ComfyUI-IG-Nodes": [ + [ + "IG Analyze SSIM", + "IG Cross Fade Images", + "IG Explorer", + "IG Float", + "IG Folder", + "IG Int", + "IG Load Image", + "IG Load Images", + "IG Multiply", + "IG Path Join", + "IG String", + "IG ZFill" + ], + { + "author": "IDGallagher", + "description": "Custom nodes to aid in the exploration of Latent Space", + "nickname": "IG Interpolation Nodes", + "title": "IG Interpolation Nodes", + "title_aux": "IG Interpolation Nodes" + } + ], + "https://github.com/Inzaniak/comfyui-ranbooru": [ + [ + "PromptBackground", + "PromptLimit", + "PromptMix", + "PromptRandomWeight", + "PromptRemove", + "Ranbooru", + "RanbooruURL", + "RandomPicturePath" + ], + { + "title_aux": "Ranbooru for ComfyUI" + } + ], + "https://github.com/JPS-GER/ComfyUI_JPS-Nodes": [ + [ + "Conditioning Switch (JPS)", + "ControlNet Switch (JPS)", + "Crop Image Pipe (JPS)", + "Crop Image Settings (JPS)", + "Crop Image Square (JPS)", + "Crop Image TargetSize (JPS)", + "CtrlNet CannyEdge Pipe (JPS)", + "CtrlNet CannyEdge Settings (JPS)", + "CtrlNet MiDaS Pipe (JPS)", + "CtrlNet MiDaS Settings (JPS)", + "CtrlNet OpenPose Pipe (JPS)", + "CtrlNet OpenPose Settings (JPS)", + "CtrlNet ZoeDepth Pipe (JPS)", + "CtrlNet ZoeDepth Settings (JPS)", + "Disable Enable Switch (JPS)", + "Enable Disable Switch (JPS)", + "Generation TXT IMG Settings (JPS)", + "Get Date Time String (JPS)", + "Get Image Size (JPS)", + "IP Adapter Settings (JPS)", + "IP Adapter Settings Pipe (JPS)", + "IP Adapter Single Settings (JPS)", + "IP Adapter Single Settings Pipe (JPS)", + "IPA Switch (JPS)", + "Image Switch (JPS)", + "ImageToImage Pipe (JPS)", + "ImageToImage Settings (JPS)", + "Images Masks MultiPipe (JPS)", + "Integer Switch (JPS)", + "Largest Int (JPS)", + "Latent Switch (JPS)", + "Lora Loader (JPS)", + "Mask Switch (JPS)", + "Model Switch (JPS)", + "Multiply Float Float (JPS)", + "Multiply Int Float (JPS)", + 
"Multiply Int Int (JPS)", + "Resolution Multiply (JPS)", + "Revision Settings (JPS)", + "Revision Settings Pipe (JPS)", + "SDXL Basic Settings (JPS)", + "SDXL Basic Settings Pipe (JPS)", + "SDXL Fundamentals MultiPipe (JPS)", + "SDXL Prompt Handling (JPS)", + "SDXL Prompt Handling Plus (JPS)", + "SDXL Prompt Styler (JPS)", + "SDXL Recommended Resolution Calc (JPS)", + "SDXL Resolutions (JPS)", + "Sampler Scheduler Settings (JPS)", + "Save Images Plus (JPS)", + "Substract Int Int (JPS)", + "Text Concatenate (JPS)", + "Text Prompt (JPS)", + "VAE Switch (JPS)" + ], + { + "author": "JPS", + "description": "Various nodes to handle SDXL Resolutions, SDXL Basic Settings, IP Adapter Settings, Revision Settings, SDXL Prompt Styler, Crop Image to Square, Crop Image to Target Size, Get Date-Time String, Resolution Multiply, Largest Integer, 5-to-1 Switches for Integer, Images, Latents, Conditioning, Model, VAE, ControlNet", + "nickname": "JPS Custom Nodes", + "title": "JPS Custom Nodes for ComfyUI", + "title_aux": "JPS Custom Nodes for ComfyUI" + } + ], + "https://github.com/JaredTherriault/ComfyUI-JNodes": [ + [ + "JNodes_AddOrSetMetaDataKey", + "JNodes_AnyToString", + "JNodes_AppendReversedFrames", + "JNodes_BooleanSelectorWithString", + "JNodes_CheckpointSelectorWithString", + "JNodes_GetOutputDirectory", + "JNodes_GetParameterFromList", + "JNodes_GetParameterGlobal", + "JNodes_GetTempDirectory", + "JNodes_ImageFormatSelector", + "JNodes_ImageSizeSelector", + "JNodes_LoadVideo", + "JNodes_LoraExtractor", + "JNodes_OutVideoInfo", + "JNodes_ParseDynamicPrompts", + "JNodes_ParseParametersToGlobalList", + "JNodes_ParseWildcards", + "JNodes_PromptBuilderSingleSubject", + "JNodes_RemoveCommentedText", + "JNodes_RemoveMetaDataKey", + "JNodes_RemoveParseableDataForInference", + "JNodes_SamplerSelectorWithString", + "JNodes_SaveImageWithOutput", + "JNodes_SaveVideo", + "JNodes_SchedulerSelectorWithString", + "JNodes_SearchAndReplace", + "JNodes_SearchAndReplaceFromFile", + "JNodes_SearchAndReplaceFromList", + "JNodes_SetNegativePromptInMetaData", + "JNodes_SetPositivePromptInMetaData", + "JNodes_SplitAndJoin", + "JNodes_StringLiteral", + "JNodes_SyncedStringLiteral", + "JNodes_TokenCounter", + "JNodes_TrimAndStrip", + "JNodes_UploadVideo", + "JNodes_VaeSelectorWithString" + ], + { + "title_aux": "ComfyUI-JNodes" + } + ], + "https://github.com/JcandZero/ComfyUI_GLM4Node": [ + [ + "GLM3_turbo_CHAT", + "GLM4_CHAT", + "GLM4_Vsion_IMGURL" + ], + { + "title_aux": "ComfyUI_GLM4Node" + } + ], + "https://github.com/Jcd1230/rembg-comfyui-node": [ + [ + "Image Remove Background (rembg)" + ], + { + "title_aux": "Rembg Background Removal Node for ComfyUI" + } + ], + "https://github.com/JerryOrbachJr/ComfyUI-RandomSize": [ + [ + "JOJR_RandomSize" + ], + { + "author": "JerryOrbachJr", + "description": "A ComfyUI custom node that randomly selects a height and width pair from a list in a config file", + "nickname": "Random Size", + "title": "Random Size", + "title_aux": "ComfyUI-RandomSize" + } + ], + "https://github.com/Jordach/comfy-plasma": [ + [ + "JDC_AutoContrast", + "JDC_BlendImages", + "JDC_BrownNoise", + "JDC_Contrast", + "JDC_EqualizeGrey", + "JDC_GaussianBlur", + "JDC_GreyNoise", + "JDC_Greyscale", + "JDC_ImageLoader", + "JDC_ImageLoaderMeta", + "JDC_PinkNoise", + "JDC_Plasma", + "JDC_PlasmaSampler", + "JDC_PowerImage", + "JDC_RandNoise", + "JDC_ResizeFactor" + ], + { + "title_aux": "comfy-plasma" + } + ], + "https://github.com/Kaharos94/ComfyUI-Saveaswebp": [ + [ + "Save_as_webp" + ], + { + "title_aux": 
"ComfyUI-Saveaswebp" + } + ], + "https://github.com/Kangkang625/ComfyUI-paint-by-example": [ + [ + "PaintbyExamplePipeLoader", + "PaintbyExampleSampler" + ], + { + "title_aux": "ComfyUI-Paint-by-Example" + } + ], + "https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet": [ + [ + "ACN_AdvancedControlNetApply", + "ACN_ControlNetLoaderWithLoraAdvanced", + "ACN_DefaultUniversalWeights", + "ACN_SparseCtrlIndexMethodNode", + "ACN_SparseCtrlLoaderAdvanced", + "ACN_SparseCtrlMergedLoaderAdvanced", + "ACN_SparseCtrlRGBPreprocessor", + "ACN_SparseCtrlSpreadMethodNode", + "ControlNetLoaderAdvanced", + "CustomControlNetWeights", + "CustomT2IAdapterWeights", + "DiffControlNetLoaderAdvanced", + "LatentKeyframe", + "LatentKeyframeBatchedGroup", + "LatentKeyframeGroup", + "LatentKeyframeTiming", + "LoadImagesFromDirectory", + "ScaledSoftControlNetWeights", + "ScaledSoftMaskedUniversalWeights", + "SoftControlNetWeights", + "SoftT2IAdapterWeights", + "TimestepKeyframe" + ], + { + "title_aux": "ComfyUI-Advanced-ControlNet" + } + ], + "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved": [ + [ + "ADE_AdjustPEFullStretch", + "ADE_AdjustPEManual", + "ADE_AdjustPESweetspotStretch", + "ADE_AnimateDiffCombine", + "ADE_AnimateDiffKeyframe", + "ADE_AnimateDiffLoRALoader", + "ADE_AnimateDiffLoaderGen1", + "ADE_AnimateDiffLoaderV1Advanced", + "ADE_AnimateDiffLoaderWithContext", + "ADE_AnimateDiffModelSettings", + "ADE_AnimateDiffModelSettingsAdvancedAttnStrengths", + "ADE_AnimateDiffModelSettingsSimple", + "ADE_AnimateDiffModelSettings_Release", + "ADE_AnimateDiffSamplingSettings", + "ADE_AnimateDiffSettings", + "ADE_AnimateDiffUniformContextOptions", + "ADE_AnimateDiffUnload", + "ADE_ApplyAnimateDiffModel", + "ADE_ApplyAnimateDiffModelSimple", + "ADE_BatchedContextOptions", + "ADE_CustomCFG", + "ADE_CustomCFGKeyframe", + "ADE_EmptyLatentImageLarge", + "ADE_IterationOptsDefault", + "ADE_IterationOptsFreeInit", + "ADE_LoadAnimateDiffModel", + "ADE_LoopedUniformContextOptions", + "ADE_LoopedUniformViewOptions", + "ADE_MaskedLoadLora", + "ADE_MultivalDynamic", + "ADE_MultivalScaledMask", + "ADE_NoiseLayerAdd", + "ADE_NoiseLayerAddWeighted", + "ADE_NoiseLayerReplace", + "ADE_RawSigmaSchedule", + "ADE_SigmaSchedule", + "ADE_SigmaScheduleSplitAndCombine", + "ADE_SigmaScheduleWeightedAverage", + "ADE_SigmaScheduleWeightedAverageInterp", + "ADE_StandardStaticContextOptions", + "ADE_StandardStaticViewOptions", + "ADE_StandardUniformContextOptions", + "ADE_StandardUniformViewOptions", + "ADE_UseEvolvedSampling", + "ADE_ViewsOnlyContextOptions", + "AnimateDiffLoaderV1", + "CheckpointLoaderSimpleWithNoiseSelect" + ], + { + "title_aux": "AnimateDiff Evolved" + } + ], + "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite": [ + [ + "VHS_BatchManager", + "VHS_DuplicateImages", + "VHS_DuplicateLatents", + "VHS_DuplicateMasks", + "VHS_GetImageCount", + "VHS_GetLatentCount", + "VHS_GetMaskCount", + "VHS_LoadAudio", + "VHS_LoadImages", + "VHS_LoadImagesPath", + "VHS_LoadVideo", + "VHS_LoadVideoPath", + "VHS_MergeImages", + "VHS_MergeLatents", + "VHS_MergeMasks", + "VHS_PruneOutputs", + "VHS_SelectEveryNthImage", + "VHS_SelectEveryNthLatent", + "VHS_SelectEveryNthMask", + "VHS_SplitImages", + "VHS_SplitLatents", + "VHS_SplitMasks", + "VHS_VAEDecodeBatched", + "VHS_VAEEncodeBatched", + "VHS_VideoCombine" + ], + { + "title_aux": "ComfyUI-VideoHelperSuite" + } + ], + "https://github.com/LEv145/images-grid-comfy-plugin": [ + [ + "GridAnnotation", + "ImageCombine", + "ImagesGridByColumns", + "ImagesGridByRows", + 
"LatentCombine" + ], + { + "title_aux": "ImagesGrid" + } + ], + "https://github.com/LarryJane491/Image-Captioning-in-ComfyUI": [ + [ + "LoRA Caption Load", + "LoRA Caption Save" + ], + { + "title_aux": "Image-Captioning-in-ComfyUI" + } + ], + "https://github.com/LarryJane491/Lora-Training-in-Comfy": [ + [ + "Lora Training in Comfy (Advanced)", + "Lora Training in ComfyUI", + "Tensorboard Access" + ], + { + "title_aux": "Lora-Training-in-Comfy" + } + ], + "https://github.com/Layer-norm/comfyui-lama-remover": [ + [ + "LamaRemover", + "LamaRemoverIMG" + ], + { + "title_aux": "Comfyui lama remover" + } + ], + "https://github.com/Lerc/canvas_tab": [ + [ + "Canvas_Tab", + "Send_To_Editor" + ], + { + "author": "Lerc", + "description": "This extension provides a full page image editor with mask support. There are two nodes, one to receive images from the editor and one to send images to the editor.", + "nickname": "Canvas Tab", + "title": "Canvas Tab", + "title_aux": "Canvas Tab" + } + ], + "https://github.com/Limitex/ComfyUI-Calculation": [ + [ + "CenterCalculation", + "CreateQRCode" + ], + { + "title_aux": "ComfyUI-Calculation" + } + ], + "https://github.com/Limitex/ComfyUI-Diffusers": [ + [ + "CreateIntListNode", + "DiffusersClipTextEncode", + "DiffusersModelMakeup", + "DiffusersPipelineLoader", + "DiffusersSampler", + "DiffusersSchedulerLoader", + "DiffusersVaeLoader", + "LcmLoraLoader", + "StreamDiffusionCreateStream", + "StreamDiffusionFastSampler", + "StreamDiffusionSampler", + "StreamDiffusionWarmup" + ], + { + "title_aux": "ComfyUI-Diffusers" + } + ], + "https://github.com/Loewen-Hob/rembg-comfyui-node-better": [ + [ + "Image Remove Background (rembg)" + ], + { + "title_aux": "Rembg Background Removal Node for ComfyUI" + } + ], + "https://github.com/LonicaMewinsky/ComfyUI-MakeFrame": [ + [ + "BreakFrames", + "BreakGrid", + "GetKeyFrames", + "MakeGrid", + "RandomImageFromDir" + ], + { + "title_aux": "ComfyBreakAnim" + } + ], + "https://github.com/LonicaMewinsky/ComfyUI-RawSaver": [ + [ + "SaveTifImage" + ], + { + "title_aux": "ComfyUI-RawSaver" + } + ], + "https://github.com/LyazS/comfyui-anime-seg": [ + [ + "Anime Character Seg" + ], + { + "title_aux": "Anime Character Segmentation node for comfyui" + } + ], + "https://github.com/M1kep/ComfyLiterals": [ + [ + "Checkpoint", + "Float", + "Int", + "KepStringLiteral", + "Lora", + "Operation", + "String" + ], + { + "title_aux": "ComfyLiterals" + } + ], + "https://github.com/M1kep/ComfyUI-KepOpenAI": [ + [ + "KepOpenAI_ImageWithPrompt" + ], + { + "title_aux": "ComfyUI-KepOpenAI" + } + ], + "https://github.com/M1kep/ComfyUI-OtherVAEs": [ + [ + "OtherVAE_Taesd" + ], + { + "title_aux": "ComfyUI-OtherVAEs" + } + ], + "https://github.com/M1kep/Comfy_KepKitchenSink": [ + [ + "KepRotateImage" + ], + { + "title_aux": "Comfy_KepKitchenSink" + } + ], + "https://github.com/M1kep/Comfy_KepListStuff": [ + [ + "Empty Images", + "Image Overlay", + "ImageListLoader", + "Join Float Lists", + "Join Image Lists", + "KepStringList", + "KepStringListFromNewline", + "Kep_JoinListAny", + "Kep_RepeatList", + "Kep_ReverseList", + "Kep_VariableImageBuilder", + "List Length", + "Range(Num Steps) - Float", + "Range(Num Steps) - Int", + "Range(Step) - Float", + "Range(Step) - Int", + "Stack Images", + "XYAny", + "XYImage" + ], + { + "title_aux": "Comfy_KepListStuff" + } + ], + "https://github.com/M1kep/Comfy_KepMatteAnything": [ + [ + "MatteAnything_DinoBoxes", + "MatteAnything_GenerateVITMatte", + "MatteAnything_InitSamPredictor", + "MatteAnything_LoadDINO", + 
"MatteAnything_LoadVITMatteModel", + "MatteAnything_SAMLoader", + "MatteAnything_SAMMaskFromBoxes", + "MatteAnything_ToTrimap" + ], + { + "title_aux": "Comfy_KepMatteAnything" + } + ], + "https://github.com/M1kep/KepPromptLang": [ + [ + "Build Gif", + "Special CLIP Loader" + ], + { + "title_aux": "KepPromptLang" + } + ], + "https://github.com/MNeMoNiCuZ/ComfyUI-mnemic-nodes": [ + [ + "Save Text File_mne" + ], + { + "title_aux": "ComfyUI-mnemic-nodes" + } + ], + "https://github.com/Mamaaaamooooo/batchImg-rembg-ComfyUI-nodes": [ + [ + "Image Remove Background (rembg)" + ], + { + "title_aux": "Batch Rembg for ComfyUI" + } + ], + "https://github.com/ManglerFTW/ComfyI2I": [ + [ + "Color Transfer", + "Combine and Paste", + "Inpaint Segments", + "Mask Ops" + ], + { + "author": "ManglerFTW", + "title": "ComfyI2I", + "title_aux": "ComfyI2I" + } + ], + "https://github.com/MarkoCa1/ComfyUI_Segment_Mask": [ + [ + "AutomaticMask(segment anything)" + ], + { + "title_aux": "ComfyUI_Segment_Mask" + } + ], + "https://github.com/Miosp/ComfyUI-FBCNN": [ + [ + "JPEG artifacts removal FBCNN" + ], + { + "title_aux": "ComfyUI-FBCNN" + } + ], + "https://github.com/MitoshiroPJ/comfyui_slothful_attention": [ + [ + "NearSightedAttention", + "NearSightedAttentionSimple", + "NearSightedTile", + "SlothfulAttention" + ], + { + "title_aux": "ComfyUI Slothful Attention" + } + ], + "https://github.com/MrForExample/ComfyUI-3D-Pack": [ + [], + { + "nodename_pattern": "^\\[Comfy3D\\]", + "title_aux": "ComfyUI-3D-Pack" + } + ], + "https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved": [ + [], + { + "nodename_pattern": "^\\[AnimateAnyone\\]", + "title_aux": "ComfyUI-AnimateAnyone-Evolved" + } + ], + "https://github.com/NicholasMcCarthy/ComfyUI_TravelSuite": [ + [ + "LatentTravel" + ], + { + "title_aux": "ComfyUI_TravelSuite" + } + ], + "https://github.com/NimaNzrii/comfyui-photoshop": [ + [ + "PhotoshopToComfyUI" + ], + { + "title_aux": "comfyui-photoshop" + } + ], + "https://github.com/NimaNzrii/comfyui-popup_preview": [ + [ + "PreviewPopup" + ], + { + "title_aux": "comfyui-popup_preview" + } + ], + "https://github.com/Niutonian/ComfyUi-NoodleWebcam": [ + [ + "WebcamNode" + ], + { + "title_aux": "ComfyUi-NoodleWebcam" + } + ], + "https://github.com/Nlar/ComfyUI_CartoonSegmentation": [ + [ + "AnimeSegmentation", + "KenBurnsConfigLoader", + "KenBurns_Processor", + "LoadImageFilename" + ], + { + "author": "Nels Larsen", + "description": "This extension offers a front end to the Cartoon Segmentation Project (https://github.com/CartoonSegmentation/CartoonSegmentation)", + "nickname": "CfyCS", + "title": "ComfyUI_CartoonSegmentation", + "title_aux": "ComfyUI_CartoonSegmentation" + } + ], + "https://github.com/NotHarroweD/Harronode": [ + [ + "Harronode" + ], + { + "author": "HarroweD and quadmoon (https://github.com/traugdor)", + "description": "This extension to ComfyUI will build a prompt for the Harrlogos LoRA for SDXL.", + "nickname": "Harronode", + "nodename_pattern": "Harronode", + "title": "Harrlogos Prompt Builder Node", + "title_aux": "Harronode" + } + ], + "https://github.com/Nourepide/ComfyUI-Allor": [ + [ + "AlphaChanelAdd", + "AlphaChanelAddByMask", + "AlphaChanelAsMask", + "AlphaChanelRemove", + "AlphaChanelRestore", + "ClipClamp", + "ClipVisionClamp", + "ClipVisionOutputClamp", + "ConditioningClamp", + "ControlNetClamp", + "GligenClamp", + "ImageBatchCopy", + "ImageBatchFork", + "ImageBatchGet", + "ImageBatchJoin", + "ImageBatchPermute", + "ImageBatchRemove", + "ImageClamp", + "ImageCompositeAbsolute", + 
"ImageCompositeAbsoluteByContainer", + "ImageCompositeRelative", + "ImageCompositeRelativeByContainer", + "ImageContainer", + "ImageContainerInheritanceAdd", + "ImageContainerInheritanceMax", + "ImageContainerInheritanceScale", + "ImageContainerInheritanceSum", + "ImageDrawArc", + "ImageDrawArcByContainer", + "ImageDrawChord", + "ImageDrawChordByContainer", + "ImageDrawEllipse", + "ImageDrawEllipseByContainer", + "ImageDrawLine", + "ImageDrawLineByContainer", + "ImageDrawPieslice", + "ImageDrawPiesliceByContainer", + "ImageDrawPolygon", + "ImageDrawRectangle", + "ImageDrawRectangleByContainer", + "ImageDrawRectangleRounded", + "ImageDrawRectangleRoundedByContainer", + "ImageEffectsAdjustment", + "ImageEffectsGrayscale", + "ImageEffectsLensBokeh", + "ImageEffectsLensChromaticAberration", + "ImageEffectsLensOpticAxis", + "ImageEffectsLensVignette", + "ImageEffectsLensZoomBurst", + "ImageEffectsNegative", + "ImageEffectsSepia", + "ImageFilterBilateralBlur", + "ImageFilterBlur", + "ImageFilterBoxBlur", + "ImageFilterContour", + "ImageFilterDetail", + "ImageFilterEdgeEnhance", + "ImageFilterEdgeEnhanceMore", + "ImageFilterEmboss", + "ImageFilterFindEdges", + "ImageFilterGaussianBlur", + "ImageFilterGaussianBlurAdvanced", + "ImageFilterMax", + "ImageFilterMedianBlur", + "ImageFilterMin", + "ImageFilterMode", + "ImageFilterRank", + "ImageFilterSharpen", + "ImageFilterSmooth", + "ImageFilterSmoothMore", + "ImageFilterStackBlur", + "ImageNoiseBeta", + "ImageNoiseBinomial", + "ImageNoiseBytes", + "ImageNoiseGaussian", + "ImageSegmentation", + "ImageSegmentationCustom", + "ImageSegmentationCustomAdvanced", + "ImageText", + "ImageTextMultiline", + "ImageTextMultilineOutlined", + "ImageTextOutlined", + "ImageTransformCropAbsolute", + "ImageTransformCropCorners", + "ImageTransformCropRelative", + "ImageTransformPaddingAbsolute", + "ImageTransformPaddingRelative", + "ImageTransformResizeAbsolute", + "ImageTransformResizeClip", + "ImageTransformResizeRelative", + "ImageTransformRotate", + "ImageTransformTranspose", + "LatentClamp", + "MaskClamp", + "ModelClamp", + "StyleModelClamp", + "UpscaleModelClamp", + "VaeClamp" + ], + { + "title_aux": "Allor Plugin" + } + ], + "https://github.com/Nuked88/ComfyUI-N-Nodes": [ + [ + "CLIPTextEncodeAdvancedNSuite [n-suite]", + "DynamicPrompt [n-suite]", + "Float Variable [n-suite]", + "FrameInterpolator [n-suite]", + "GPT Loader Simple [n-suite]", + "GPT Sampler [n-suite]", + "ImagePadForOutpaintAdvanced [n-suite]", + "Integer Variable [n-suite]", + "Llava Clip Loader [n-suite]", + "LoadFramesFromFolder [n-suite]", + "LoadVideo [n-suite]", + "SaveVideo [n-suite]", + "SetMetadataForSaveVideo [n-suite]", + "String Variable [n-suite]" + ], + { + "title_aux": "ComfyUI-N-Nodes" + } + ], + "https://github.com/Off-Live/ComfyUI-off-suite": [ + [ + "Apply CLAHE", + "Cached Image Load From URL", + "Crop Center wigh SEGS", + "Crop Center with SEGS", + "Dilate Mask for Each Face", + "GW Number Formatting", + "Image Crop Fit", + "Image Resize Fit", + "OFF SEGS to Image", + "Paste Face Segment to Image", + "Query Gender and Age", + "SEGS to Face Crop Data", + "Safe Mask to Image", + "VAE Encode For Inpaint V2", + "Watermarking" + ], + { + "title_aux": "ComfyUI-off-suite" + } + ], + "https://github.com/Onierous/QRNG_Node_ComfyUI/raw/main/qrng_node.py": [ + [ + "QRNG_Node_CSV" + ], + { + "title_aux": "QRNG_Node_ComfyUI" + } + ], + "https://github.com/PCMonsterx/ComfyUI-CSV-Loader": [ + [ + "Load Artists CSV", + "Load Artmovements CSV", + "Load Characters CSV", + "Load Colors CSV", + 
"Load Composition CSV", + "Load Lighting CSV", + "Load Negative CSV", + "Load Positive CSV", + "Load Settings CSV", + "Load Styles CSV" + ], + { + "title_aux": "ComfyUI-CSV-Loader" + } + ], + "https://github.com/ParmanBabra/ComfyUI-Malefish-Custom-Scripts": [ + [ + "CSVPromptsLoader", + "CombinePrompt", + "MultiLoraLoader", + "RandomPrompt" + ], + { + "title_aux": "ComfyUI-Malefish-Custom-Scripts" + } + ], + "https://github.com/Pfaeff/pfaeff-comfyui": [ + [ + "AstropulsePixelDetector", + "BackgroundRemover", + "ImagePadForBetterOutpaint", + "Inpainting", + "InpaintingPipelineLoader" + ], + { + "title_aux": "pfaeff-comfyui" + } + ], + "https://github.com/QaisMalkawi/ComfyUI-QaisHelper": [ + [ + "Bool Binary Operation", + "Bool Unary Operation", + "Item Debugger", + "Item Switch", + "Nearest SDXL Resolution", + "SDXL Resolution", + "Size Swapper" + ], + { + "title_aux": "ComfyUI-Qais-Helper" + } + ], + "https://github.com/RenderRift/ComfyUI-RenderRiftNodes": [ + [ + "AnalyseMetadata", + "DateIntegerNode", + "DisplayMetaOptions", + "LoadImageWithMeta", + "MetadataOverlayNode", + "VideoPathMetaExtraction" + ], + { + "title_aux": "ComfyUI-RenderRiftNodes" + } + ], + "https://github.com/Ryuukeisyou/comfyui_face_parsing": [ + [ + "BBoxListItemSelect(FaceParsing)", + "BBoxResize(FaceParsing)", + "ColorAdjust(FaceParsing)", + "FaceBBoxDetect(FaceParsing)", + "FaceBBoxDetectorLoader(FaceParsing)", + "FaceParse(FaceParsing)", + "FaceParsingModelLoader(FaceParsing)", + "FaceParsingProcessorLoader(FaceParsing)", + "FaceParsingResultsParser(FaceParsing)", + "GuidedFilter(FaceParsing)", + "ImageCropWithBBox(FaceParsing)", + "ImageInsertWithBBox(FaceParsing)", + "ImageListSelect(FaceParsing)", + "ImagePadWithBBox(FaceParsing)", + "ImageResizeCalculator(FaceParsing)", + "ImageResizeWithBBox(FaceParsing)", + "ImageSize(FaceParsing)", + "LatentCropWithBBox(FaceParsing)", + "LatentInsertWithBBox(FaceParsing)", + "LatentSize(FaceParsing)", + "MaskComposite(FaceParsing)", + "MaskListComposite(FaceParsing)", + "MaskListSelect(FaceParsing)", + "MaskToBBox(FaceParsing)", + "SkinDetectTraditional(FaceParsing)" + ], + { + "title_aux": "comfyui_face_parsing" + } + ], + "https://github.com/Ryuukeisyou/comfyui_image_io_helpers": [ + [ + "ImageLoadAsMaskByPath(ImageIOHelpers)", + "ImageLoadByPath(ImageIOHelpers)", + "ImageLoadFromBase64(ImageIOHelpers)", + "ImageSaveAsBase64(ImageIOHelpers)", + "ImageSaveToPath(ImageIOHelpers)" + ], + { + "title_aux": "comfyui_image_io_helpers" + } + ], + "https://github.com/SLAPaper/ComfyUI-Image-Selector": [ + [ + "ImageDuplicator", + "ImageSelector", + "LatentDuplicator", + "LatentSelector" + ], + { + "title_aux": "ComfyUI-Image-Selector" + } + ], + "https://github.com/SOELexicon/ComfyUI-LexMSDBNodes": [ + [ + "MSSqlSelectNode", + "MSSqlTableNode" + ], + { + "title_aux": "LexMSDBNodes" + } + ], + "https://github.com/SOELexicon/ComfyUI-LexTools": [ + [ + "AgeClassifierNode", + "ArtOrHumanClassifierNode", + "DocumentClassificationNode", + "FoodCategoryClassifierNode", + "ImageAspectPadNode", + "ImageCaptioning", + "ImageFilterByFloatScoreNode", + "ImageFilterByIntScoreNode", + "ImageQualityScoreNode", + "ImageRankingNode", + "ImageScaleToMin", + "MD5ImageHashNode", + "SamplerPropertiesNode", + "ScoreConverterNode", + "SeedIncrementerNode", + "SegformerNode", + "SegformerNodeMasks", + "SegformerNodeMergeSegments", + "StepCfgIncrementNode" + ], + { + "title_aux": "ComfyUI-LexTools" + } + ], + 
"https://github.com/SadaleNet/CLIPTextEncodeA1111-ComfyUI/raw/master/custom_nodes/clip_text_encoder_a1111.py": [ + [ + "CLIPTextEncodeA1111", + "RerouteTextForCLIPTextEncodeA1111" + ], + { + "title_aux": "ComfyUI A1111-like Prompt Custom Node Solution" + } + ], + "https://github.com/Scholar01/ComfyUI-Keyframe": [ + [ + "KeyframeApply", + "KeyframeInterpolationPart", + "KeyframePart" + ], + { + "title_aux": "SComfyUI-Keyframe" + } + ], + "https://github.com/SeargeDP/SeargeSDXL": [ + [ + "SeargeAdvancedParameters", + "SeargeCheckpointLoader", + "SeargeConditionMixing", + "SeargeConditioningMuxer2", + "SeargeConditioningMuxer5", + "SeargeConditioningParameters", + "SeargeControlnetAdapterV2", + "SeargeControlnetModels", + "SeargeCustomAfterUpscaling", + "SeargeCustomAfterVaeDecode", + "SeargeCustomPromptMode", + "SeargeDebugPrinter", + "SeargeEnablerInputs", + "SeargeFloatConstant", + "SeargeFloatMath", + "SeargeFloatPair", + "SeargeFreeU", + "SeargeGenerated1", + "SeargeGenerationParameters", + "SeargeHighResolution", + "SeargeImage2ImageAndInpainting", + "SeargeImageAdapterV2", + "SeargeImageSave", + "SeargeImageSaving", + "SeargeInput1", + "SeargeInput2", + "SeargeInput3", + "SeargeInput4", + "SeargeInput5", + "SeargeInput6", + "SeargeInput7", + "SeargeIntegerConstant", + "SeargeIntegerMath", + "SeargeIntegerPair", + "SeargeIntegerScaler", + "SeargeLatentMuxer3", + "SeargeLoraLoader", + "SeargeLoras", + "SeargeMagicBox", + "SeargeModelSelector", + "SeargeOperatingMode", + "SeargeOutput1", + "SeargeOutput2", + "SeargeOutput3", + "SeargeOutput4", + "SeargeOutput5", + "SeargeOutput6", + "SeargeOutput7", + "SeargeParameterProcessor", + "SeargePipelineStart", + "SeargePipelineTerminator", + "SeargePreviewImage", + "SeargePromptAdapterV2", + "SeargePromptCombiner", + "SeargePromptStyles", + "SeargePromptText", + "SeargeSDXLBasePromptEncoder", + "SeargeSDXLImage2ImageSampler", + "SeargeSDXLImage2ImageSampler2", + "SeargeSDXLPromptEncoder", + "SeargeSDXLRefinerPromptEncoder", + "SeargeSDXLSampler", + "SeargeSDXLSampler2", + "SeargeSDXLSamplerV3", + "SeargeSamplerAdvanced", + "SeargeSamplerInputs", + "SeargeSaveFolderInputs", + "SeargeSeparator", + "SeargeStylePreprocessor", + "SeargeTextInputV2", + "SeargeUpscaleModelLoader", + "SeargeUpscaleModels", + "SeargeVAELoader" + ], + { + "title_aux": "SeargeSDXL" + } + ], + "https://github.com/Ser-Hilary/SDXL_sizing/raw/main/conditioning_sizing_for_SDXL.py": [ + [ + "get_aspect_from_image", + "get_aspect_from_ints", + "sizing_node", + "sizing_node_basic", + "sizing_node_unparsed" + ], + { + "title_aux": "SDXL_sizing" + } + ], + "https://github.com/ShmuelRonen/ComfyUI-SVDResizer": [ + [ + "SVDRsizer" + ], + { + "title_aux": "ComfyUI-SVDResizer" + } + ], + "https://github.com/Shraknard/ComfyUI-Remover": [ + [ + "Remover" + ], + { + "title_aux": "ComfyUI-Remover" + } + ], + "https://github.com/Siberpone/lazy-pony-prompter": [ + [ + "LPP_Deleter", + "LPP_Derpibooru", + "LPP_E621", + "LPP_Loader_Derpibooru", + "LPP_Loader_E621", + "LPP_Saver" + ], + { + "title_aux": "Lazy Pony Prompter" + } + ], + "https://github.com/Smuzzies/comfyui_chatbox_overlay/raw/main/chatbox_overlay.py": [ + [ + "Chatbox Overlay" + ], + { + "title_aux": "Chatbox Overlay node for ComfyUI" + } + ], + "https://github.com/SoftMeng/ComfyUI_Mexx_Poster": [ + [ + "ComfyUI_Mexx_Poster" + ], + { + "title_aux": "ComfyUI_Mexx_Poster" + } + ], + "https://github.com/SoftMeng/ComfyUI_Mexx_Styler": [ + [ + "MexxSDXLPromptStyler", + "MexxSDXLPromptStylerAdvanced" + ], + { + "title_aux": 
"ComfyUI_Mexx_Styler" + } + ], + "https://github.com/SpaceKendo/ComfyUI-svd_txt2vid": [ + [ + "SVD_txt2vid_ConditioningwithLatent" + ], + { + "title_aux": "Text to video for Stable Video Diffusion in ComfyUI" + } + ], + "https://github.com/Stability-AI/stability-ComfyUI-nodes": [ + [ + "ColorBlend", + "ControlLoraSave", + "GetImageSize" + ], + { + "title_aux": "stability-ComfyUI-nodes" + } + ], + "https://github.com/StartHua/ComfyUI_Seg_VITON": [ + [ + "segformer_agnostic", + "segformer_clothes", + "segformer_remove_bg", + "stabel_vition" + ], + { + "title_aux": "ComfyUI_Seg_VITON" + } + ], + "https://github.com/StartHua/Comfyui_joytag": [ + [ + "CXH_JoyTag" + ], + { + "title_aux": "Comfyui_joytag" + } + ], + "https://github.com/StartHua/Comfyui_segformer_b2_clothes": [ + [ + "segformer_b2_clothes" + ], + { + "title_aux": "comfyui_segformer_b2_clothes" + } + ], + "https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes": [ + [ + "CR 8 Channel In", + "CR 8 Channel Out", + "CR Apply ControlNet", + "CR Apply LoRA Stack", + "CR Apply Model Merge", + "CR Apply Multi Upscale", + "CR Apply Multi-ControlNet", + "CR Arabic Text RTL", + "CR Aspect Ratio", + "CR Aspect Ratio Banners", + "CR Aspect Ratio SDXL", + "CR Aspect Ratio Social Media", + "CR Batch Images From List", + "CR Batch Process Switch", + "CR Binary Pattern", + "CR Binary To Bit List", + "CR Bit Schedule", + "CR Central Schedule", + "CR Checker Pattern", + "CR Clamp Value", + "CR Clip Input Switch", + "CR Color Bars", + "CR Color Gradient", + "CR Color Panel", + "CR Color Tint", + "CR Combine Prompt", + "CR Combine Schedules", + "CR Comic Panel Templates", + "CR Composite Text", + "CR Conditioning Input Switch", + "CR Conditioning Mixer", + "CR ControlNet Input Switch", + "CR Current Frame", + "CR Cycle Images", + "CR Cycle Images Simple", + "CR Cycle LoRAs", + "CR Cycle Models", + "CR Cycle Text", + "CR Cycle Text Simple", + "CR Data Bus In", + "CR Data Bus Out", + "CR Debatch Frames", + "CR Diamond Panel", + "CR Draw Perspective Text", + "CR Draw Pie", + "CR Draw Shape", + "CR Draw Text", + "CR Encode Scheduled Prompts", + "CR Feathered Border", + "CR Float Range List", + "CR Float To Integer", + "CR Float To String", + "CR Font File List", + "CR Get Parameter From Prompt", + "CR Gradient Float", + "CR Gradient Integer", + "CR Half Drop Panel", + "CR Halftone Filter", + "CR Halftone Grid", + "CR Hires Fix Process Switch", + "CR Image Border", + "CR Image Grid Panel", + "CR Image Input Switch", + "CR Image Input Switch (4 way)", + "CR Image List", + "CR Image List Simple", + "CR Image Output", + "CR Image Panel", + "CR Image Pipe Edit", + "CR Image Pipe In", + "CR Image Pipe Out", + "CR Image Size", + "CR Img2Img Process Switch", + "CR Increment Float", + "CR Increment Integer", + "CR Index", + "CR Index Increment", + "CR Index Multiply", + "CR Index Reset", + "CR Input Text List", + "CR Integer Multiple", + "CR Integer Range List", + "CR Integer To String", + "CR Interpolate Latents", + "CR Intertwine Lists", + "CR Keyframe List", + "CR Latent Batch Size", + "CR Latent Input Switch", + "CR LoRA List", + "CR LoRA Stack", + "CR Load Animation Frames", + "CR Load Flow Frames", + "CR Load GIF As List", + "CR Load Image List", + "CR Load Image List Plus", + "CR Load LoRA", + "CR Load Prompt Style", + "CR Load Schedule From File", + "CR Load Scheduled ControlNets", + "CR Load Scheduled LoRAs", + "CR Load Scheduled Models", + "CR Load Text List", + "CR Mask Text", + "CR Math Operation", + "CR Model Input Switch", + "CR Model List", + "CR 
Model Merge Stack", + "CR Module Input", + "CR Module Output", + "CR Module Pipe Loader", + "CR Multi Upscale Stack", + "CR Multi-ControlNet Stack", + "CR Multiline Text", + "CR Output Flow Frames", + "CR Output Schedule To File", + "CR Overlay Text", + "CR Overlay Transparent Image", + "CR Page Layout", + "CR Pipe Switch", + "CR Polygons", + "CR Prompt List", + "CR Prompt List Keyframes", + "CR Prompt Scheduler", + "CR Prompt Text", + "CR Radial Gradient", + "CR Random Hex Color", + "CR Random LoRA Stack", + "CR Random Multiline Colors", + "CR Random Multiline Values", + "CR Random Panel Codes", + "CR Random RGB", + "CR Random RGB Gradient", + "CR Random Shape Pattern", + "CR Random Weight LoRA", + "CR Repeater", + "CR SD1.5 Aspect Ratio", + "CR SDXL Aspect Ratio", + "CR SDXL Base Prompt Encoder", + "CR SDXL Prompt Mix Presets", + "CR SDXL Prompt Mixer", + "CR SDXL Style Text", + "CR Save Text To File", + "CR Schedule Input Switch", + "CR Schedule To ScheduleList", + "CR Seamless Checker", + "CR Seed", + "CR Seed to Int", + "CR Select Font", + "CR Select ISO Size", + "CR Select Model", + "CR Select Resize Method", + "CR Set Switch From String", + "CR Set Value On Binary", + "CR Set Value On Boolean", + "CR Set Value on String", + "CR Simple Banner", + "CR Simple Binary Pattern", + "CR Simple Binary Pattern Simple", + "CR Simple Image Compare", + "CR Simple List", + "CR Simple Meme Template", + "CR Simple Prompt List", + "CR Simple Prompt List Keyframes", + "CR Simple Prompt Scheduler", + "CR Simple Schedule", + "CR Simple Text Panel", + "CR Simple Text Scheduler", + "CR Simple Text Watermark", + "CR Simple Titles", + "CR Simple Value Scheduler", + "CR Split String", + "CR Starburst Colors", + "CR Starburst Lines", + "CR String To Boolean", + "CR String To Combo", + "CR String To Number", + "CR Style Bars", + "CR Switch Model and CLIP", + "CR Text", + "CR Text Blacklist", + "CR Text Concatenate", + "CR Text Cycler", + "CR Text Input Switch", + "CR Text Input Switch (4 way)", + "CR Text Length", + "CR Text List", + "CR Text List Simple", + "CR Text List To String", + "CR Text Operation", + "CR Text Replace", + "CR Text Scheduler", + "CR Thumbnail Preview", + "CR Trigger", + "CR Upscale Image", + "CR VAE Decode", + "CR VAE Input Switch", + "CR Value", + "CR Value Cycler", + "CR Value Scheduler", + "CR Vignette Filter", + "CR XY From Folder", + "CR XY Index", + "CR XY Interpolate", + "CR XY List", + "CR XY Product", + "CR XY Save Grid Image", + "CR XYZ Index", + "CR_Aspect Ratio For Print" + ], + { + "author": "Suzie1", + "description": "175 custom nodes for artists, designers and animators.", + "nickname": "Comfyroll Studio", + "title": "Comfyroll Studio", + "title_aux": "ComfyUI_Comfyroll_CustomNodes" + } + ], + "https://github.com/Sxela/ComfyWarp": [ + [ + "ExtractOpticalFlow", + "LoadFrame", + "LoadFrameFromDataset", + "LoadFrameFromFolder", + "LoadFramePairFromDataset", + "LoadFrameSequence", + "MakeFrameDataset", + "MixConsistencyMaps", + "OffsetNumber", + "ResizeToFit", + "SaveFrame", + "WarpFrame" + ], + { + "title_aux": "ComfyWarp" + } + ], + "https://github.com/TGu-97/ComfyUI-TGu-utils": [ + [ + "MPNReroute", + "MPNSwitch", + "PNSwitch" + ], + { + "title_aux": "TGu Utilities" + } + ], + "https://github.com/THtianhao/ComfyUI-FaceChain": [ + [ + "FC CropAndPaste", + "FC CropBottom", + "FC CropToOrigin", + "FC FaceDetectCrop", + "FC FaceFusion", + "FC FaceSegAndReplace", + "FC FaceSegment", + "FC MaskOP", + "FC RemoveCannyFace", + "FC ReplaceByMask", + "FC StyleLoraLoad" + ], + { + 
"title_aux": "ComfyUI-FaceChain" + } + ], + "https://github.com/THtianhao/ComfyUI-Portrait-Maker": [ + [ + "PM_BoxCropImage", + "PM_ColorTransfer", + "PM_ExpandMaskBox", + "PM_FaceFusion", + "PM_FaceShapMatch", + "PM_FaceSkin", + "PM_GetImageInfo", + "PM_ImageResizeTarget", + "PM_ImageScaleShort", + "PM_MakeUpTransfer", + "PM_MaskDilateErode", + "PM_MaskMerge2Image", + "PM_PortraitEnhancement", + "PM_RatioMerge2Image", + "PM_ReplaceBoxImg", + "PM_RetinaFace", + "PM_Similarity", + "PM_SkinRetouching", + "PM_SuperColorTransfer", + "PM_SuperMakeUpTransfer" + ], + { + "title_aux": "ComfyUI-Portrait-Maker" + } + ], + "https://github.com/TRI3D-LC/tri3d-comfyui-nodes": [ + [ + "tri3d-adjust-neck", + "tri3d-atr-parse", + "tri3d-atr-parse-batch", + "tri3d-clipdrop-bgremove-api", + "tri3d-dwpose", + "tri3d-extract-hand", + "tri3d-extract-parts-batch", + "tri3d-extract-parts-batch2", + "tri3d-extract-parts-mask-batch", + "tri3d-face-recognise", + "tri3d-float-to-image", + "tri3d-fuzzification", + "tri3d-image-mask-2-box", + "tri3d-image-mask-box-2-image", + "tri3d-interaction-canny", + "tri3d-load-pose-json", + "tri3d-pose-adaption", + "tri3d-pose-to-image", + "tri3d-position-hands", + "tri3d-position-parts-batch", + "tri3d-recolor-mask", + "tri3d-recolor-mask-LAB_space", + "tri3d-recolor-mask-LAB_space_manual", + "tri3d-recolor-mask-RGB_space", + "tri3d-skin-feathered-padded-mask", + "tri3d-swap-pixels" + ], + { + "title_aux": "tri3d-comfyui-nodes" + } + ], + "https://github.com/Taremin/comfyui-prompt-extranetworks": [ + [ + "PromptExtraNetworks" + ], + { + "title_aux": "ComfyUI Prompt ExtraNetworks" + } + ], + "https://github.com/Taremin/comfyui-string-tools": [ + [ + "StringToolsBalancedChoice", + "StringToolsConcat", + "StringToolsRandomChoice", + "StringToolsString", + "StringToolsText" + ], + { + "title_aux": "ComfyUI String Tools" + } + ], + "https://github.com/TeaCrab/ComfyUI-TeaNodes": [ + [ + "TC_ColorFill", + "TC_EqualizeCLAHE", + "TC_ImageResize", + "TC_ImageScale", + "TC_RandomColorFill", + "TC_SizeApproximation" + ], + { + "title_aux": "ComfyUI-TeaNodes" + } + ], + "https://github.com/TemryL/ComfyS3": [ + [ + "DownloadFileS3", + "LoadImageS3", + "SaveImageS3", + "SaveVideoFilesS3", + "UploadFileS3" + ], + { + "title_aux": "ComfyS3" + } + ], + "https://github.com/TheBarret/ZSuite": [ + [ + "ZSuite: Prompter", + "ZSuite: RF Noise", + "ZSuite: SeedMod" + ], + { + "title_aux": "ZSuite" + } + ], + "https://github.com/TinyTerra/ComfyUI_tinyterraNodes": [ + [ + "ttN busIN", + "ttN busOUT", + "ttN compareInput", + "ttN concat", + "ttN debugInput", + "ttN float", + "ttN hiresfixScale", + "ttN imageOutput", + "ttN imageREMBG", + "ttN int", + "ttN multiModelMerge", + "ttN pipe2BASIC", + "ttN pipe2DETAILER", + "ttN pipeEDIT", + "ttN pipeEncodeConcat", + "ttN pipeIN", + "ttN pipeKSampler", + "ttN pipeKSamplerAdvanced", + "ttN pipeKSamplerSDXL", + "ttN pipeLoader", + "ttN pipeLoaderSDXL", + "ttN pipeLoraStack", + "ttN pipeOUT", + "ttN seed", + "ttN seedDebug", + "ttN text", + "ttN text3BOX_3WAYconcat", + "ttN text7BOX_concat", + "ttN textDebug", + "ttN xyPlot" + ], + { + "author": "tinyterra", + "description": "This extension offers various pipe nodes, fullscreen image viewer based on node history, dynamic widgets, interface customization, and more.", + "nickname": "ttNodes", + "nodename_pattern": "^ttN ", + "title": "tinyterraNodes", + "title_aux": "tinyterraNodes" + } + ], + "https://github.com/TripleHeadedMonkey/ComfyUI_MileHighStyler": [ + [ + "menus" + ], + { + "title_aux": 
"ComfyUI_MileHighStyler" + } + ], + "https://github.com/Tropfchen/ComfyUI-Embedding_Picker": [ + [ + "EmbeddingPicker" + ], + { + "title_aux": "Embedding Picker" + } + ], + "https://github.com/Tropfchen/ComfyUI-yaResolutionSelector": [ + [ + "YARS", + "YARSAdv" + ], + { + "title_aux": "YARS: Yet Another Resolution Selector" + } + ], + "https://github.com/Trung0246/ComfyUI-0246": [ + [ + "0246.Beautify", + "0246.BoxRange", + "0246.CastReroute", + "0246.Cloud", + "0246.Convert", + "0246.Count", + "0246.Highway", + "0246.HighwayBatch", + "0246.Hold", + "0246.Hub", + "0246.Junction", + "0246.JunctionBatch", + "0246.Loop", + "0246.Merge", + "0246.Meta", + "0246.Pick", + "0246.RandomInt", + "0246.Script", + "0246.ScriptNode", + "0246.ScriptPile", + "0246.ScriptRule", + "0246.Stringify", + "0246.Switch" + ], + { + "author": "Trung0246", + "description": "Random nodes for ComfyUI I made to solve my struggle with ComfyUI (ex: pipe, process). Have varying quality.", + "nickname": "ComfyUI-0246", + "title": "ComfyUI-0246", + "title_aux": "ComfyUI-0246" + } + ], + "https://github.com/Ttl/ComfyUi_NNLatentUpscale": [ + [ + "NNLatentUpscale" + ], + { + "title_aux": "ComfyUI Neural network latent upscale custom node" + } + ], + "https://github.com/Umikaze-job/select_folder_path_easy": [ + [ + "SelectFolderPathEasy" + ], + { + "title_aux": "select_folder_path_easy" + } + ], + "https://github.com/WASasquatch/ASTERR": [ + [ + "ASTERR", + "SaveASTERR" + ], + { + "title_aux": "ASTERR" + } + ], + "https://github.com/WASasquatch/ComfyUI_Preset_Merger": [ + [ + "Preset_Model_Merge" + ], + { + "title_aux": "ComfyUI Preset Merger" + } + ], + "https://github.com/WASasquatch/FreeU_Advanced": [ + [ + "FreeU (Advanced)", + "FreeU_V2 (Advanced)" + ], + { + "title_aux": "FreeU_Advanced" + } + ], + "https://github.com/WASasquatch/PPF_Noise_ComfyUI": [ + [ + "Blend Latents (PPF Noise)", + "Cross-Hatch Power Fractal (PPF Noise)", + "Images as Latents (PPF Noise)", + "Perlin Power Fractal Latent (PPF Noise)" + ], + { + "title_aux": "PPF_Noise_ComfyUI" + } + ], + "https://github.com/WASasquatch/PowerNoiseSuite": [ + [ + "Blend Latents (PPF Noise)", + "Cross-Hatch Power Fractal (PPF Noise)", + "Cross-Hatch Power Fractal Settings (PPF Noise)", + "Images as Latents (PPF Noise)", + "Latent Adjustment (PPF Noise)", + "Latents to CPU (PPF Noise)", + "Linear Cross-Hatch Power Fractal (PPF Noise)", + "Perlin Power Fractal Latent (PPF Noise)", + "Perlin Power Fractal Settings (PPF Noise)", + "Power KSampler Advanced (PPF Noise)", + "Power-Law Noise (PPF Noise)" + ], + { + "title_aux": "Power Noise Suite for ComfyUI" + } + ], + "https://github.com/WASasquatch/WAS_Extras": [ + [ + "BLVAEEncode", + "CLIPTextEncodeList", + "CLIPTextEncodeSequence2", + "ConditioningBlend", + "DebugInput", + "KSamplerSeq", + "KSamplerSeq2", + "VAEEncodeForInpaint (WAS)", + "VividSharpen" + ], + { + "title_aux": "WAS_Extras" + } + ], + "https://github.com/WASasquatch/was-node-suite-comfyui": [ + [ + "BLIP Analyze Image", + "BLIP Model Loader", + "Blend Latents", + "Boolean To Text", + "Bounded Image Blend", + "Bounded Image Blend with Mask", + "Bounded Image Crop", + "Bounded Image Crop with Mask", + "Bus Node", + "CLIP Input Switch", + "CLIP Vision Input Switch", + "CLIPSeg Batch Masking", + "CLIPSeg Masking", + "CLIPSeg Model Loader", + "CLIPTextEncode (BlenderNeko Advanced + NSP)", + "CLIPTextEncode (NSP)", + "Cache Node", + "Checkpoint Loader", + "Checkpoint Loader (Simple)", + "Conditioning Input Switch", + "Constant Number", + "Control Net Model Input 
Switch", + "Convert Masks to Images", + "Create Grid Image", + "Create Grid Image from Batch", + "Create Morph Image", + "Create Morph Image from Path", + "Create Video from Path", + "Debug Number to Console", + "Dictionary to Console", + "Diffusers Hub Model Down-Loader", + "Diffusers Model Loader", + "Export API", + "Image Analyze", + "Image Aspect Ratio", + "Image Batch", + "Image Blank", + "Image Blend", + "Image Blend by Mask", + "Image Blending Mode", + "Image Bloom Filter", + "Image Bounds", + "Image Bounds to Console", + "Image Canny Filter", + "Image Chromatic Aberration", + "Image Color Palette", + "Image Crop Face", + "Image Crop Location", + "Image Crop Square Location", + "Image Displacement Warp", + "Image Dragan Photography Filter", + "Image Edge Detection Filter", + "Image Film Grain", + "Image Filter Adjustments", + "Image Flip", + "Image Generate Gradient", + "Image Gradient Map", + "Image High Pass Filter", + "Image History Loader", + "Image Input Switch", + "Image Levels Adjustment", + "Image Load", + "Image Lucy Sharpen", + "Image Median Filter", + "Image Mix RGB Channels", + "Image Monitor Effects Filter", + "Image Nova Filter", + "Image Padding", + "Image Paste Crop", + "Image Paste Crop by Location", + "Image Paste Face", + "Image Perlin Noise", + "Image Perlin Power Fractal", + "Image Pixelate", + "Image Power Noise", + "Image Rembg (Remove Background)", + "Image Remove Background (Alpha)", + "Image Remove Color", + "Image Resize", + "Image Rotate", + "Image Rotate Hue", + "Image SSAO (Ambient Occlusion)", + "Image SSDO (Direct Occlusion)", + "Image Save", + "Image Seamless Texture", + "Image Select Channel", + "Image Select Color", + "Image Shadows and Highlights", + "Image Size to Number", + "Image Stitch", + "Image Style Filter", + "Image Threshold", + "Image Tiled", + "Image Transpose", + "Image Voronoi Noise Filter", + "Image fDOF Filter", + "Image to Latent Mask", + "Image to Noise", + "Image to Seed", + "Images to Linear", + "Images to RGB", + "Inset Image Bounds", + "Integer place counter", + "KSampler (WAS)", + "KSampler Cycle", + "Latent Batch", + "Latent Input Switch", + "Latent Noise Injection", + "Latent Size to Number", + "Latent Upscale by Factor (WAS)", + "Load Cache", + "Load Image Batch", + "Load Lora", + "Load Text File", + "Logic Boolean", + "Logic Boolean Primitive", + "Logic Comparison AND", + "Logic Comparison OR", + "Logic Comparison XOR", + "Logic NOT", + "Lora Input Switch", + "Lora Loader", + "Mask Arbitrary Region", + "Mask Batch", + "Mask Batch to Mask", + "Mask Ceiling Region", + "Mask Crop Dominant Region", + "Mask Crop Minority Region", + "Mask Crop Region", + "Mask Dilate Region", + "Mask Dominant Region", + "Mask Erode Region", + "Mask Fill Holes", + "Mask Floor Region", + "Mask Gaussian Region", + "Mask Invert", + "Mask Minority Region", + "Mask Paste Region", + "Mask Smooth Region", + "Mask Threshold Region", + "Masks Add", + "Masks Combine Batch", + "Masks Combine Regions", + "Masks Subtract", + "MiDaS Depth Approximation", + "MiDaS Mask Image", + "MiDaS Model Loader", + "Model Input Switch", + "Number Counter", + "Number Input Condition", + "Number Input Switch", + "Number Multiple Of", + "Number Operation", + "Number PI", + "Number to Float", + "Number to Int", + "Number to Seed", + "Number to String", + "Number to Text", + "Prompt Multiple Styles Selector", + "Prompt Styles Selector", + "Random Number", + "SAM Image Mask", + "SAM Model Loader", + "SAM Parameters", + "SAM Parameters Combine", + "Samples Passthrough (Stat 
System)", + "Save Text File", + "Seed", + "String to Text", + "Tensor Batch to Image", + "Text Add Token by Input", + "Text Add Tokens", + "Text Compare", + "Text Concatenate", + "Text Contains", + "Text Dictionary Convert", + "Text Dictionary Get", + "Text Dictionary Keys", + "Text Dictionary New", + "Text Dictionary To Text", + "Text Dictionary Update", + "Text File History Loader", + "Text Find and Replace", + "Text Find and Replace Input", + "Text Find and Replace by Dictionary", + "Text Input Switch", + "Text List", + "Text List Concatenate", + "Text List to Text", + "Text Load Line From File", + "Text Multiline", + "Text Parse A1111 Embeddings", + "Text Parse Noodle Soup Prompts", + "Text Parse Tokens", + "Text Random Line", + "Text Random Prompt", + "Text Shuffle", + "Text String", + "Text String Truncate", + "Text to Conditioning", + "Text to Console", + "Text to Number", + "Text to String", + "True Random.org Number Generator", + "Upscale Model Loader", + "Upscale Model Switch", + "VAE Input Switch", + "Video Dump Frames", + "Write to GIF", + "Write to Video", + "unCLIP Checkpoint Loader" + ], + { + "title_aux": "WAS Node Suite" + } + ], + "https://github.com/WebDev9000/WebDev9000-Nodes": [ + [ + "IgnoreBraces", + "SettingsSwitch" + ], + { + "title_aux": "WebDev9000-Nodes" + } + ], + "https://github.com/YMC-GitHub/ymc-node-suite-comfyui": [ + [ + "canvas-util-cal-size", + "conditioning-util-input-switch", + "cutoff-region-util", + "hks-util-cal-denoise-step", + "img-util-get-image-size", + "img-util-switch-input-image", + "io-image-save", + "io-text-save", + "io-util-file-list-get", + "io-util-file-list-get-text", + "number-util-random-num", + "pipe-util-to-basic-pipe", + "region-util-get-by-center-and-size", + "region-util-get-by-lt", + "region-util-get-crop-location-from-center-size-text", + "region-util-get-pad-out-location-by-size", + "text-preset-colors", + "text-util-join-text", + "text-util-loop-text", + "text-util-path-list", + "text-util-prompt-add-prompt", + "text-util-prompt-adv-dup", + "text-util-prompt-adv-search", + "text-util-prompt-del", + "text-util-prompt-dup", + "text-util-prompt-join", + "text-util-prompt-search", + "text-util-prompt-shuffle", + "text-util-prompt-std", + "text-util-prompt-unweight", + "text-util-random-text", + "text-util-search-text", + "text-util-show-text", + "text-util-switch-text", + "xyz-util-txt-to-int" + ], + { + "title_aux": "ymc-node-suite-comfyui" + } + ], + "https://github.com/YOUR-WORST-TACO/ComfyUI-TacoNodes": [ + [ + "Example", + "TacoAnimatedLoader", + "TacoGifMaker", + "TacoImg2ImgAnimatedLoader", + "TacoImg2ImgAnimatedProcessor", + "TacoLatent" + ], + { + "title_aux": "ComfyUI-TacoNodes" + } + ], + "https://github.com/YinBailiang/MergeBlockWeighted_fo_ComfyUI": [ + [ + "MergeBlockWeighted" + ], + { + "title_aux": "MergeBlockWeighted_fo_ComfyUI" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-ArtGallery": [ + [ + "ArtGallery_Zho", + "ArtistsImage_Zho", + "CamerasImage_Zho", + "FilmsImage_Zho", + "MovementsImage_Zho", + "StylesImage_Zho" + ], + { + "title_aux": "ComfyUI-ArtGallery" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Gemini": [ + [ + "ConcatText_Zho", + "DisplayText_Zho", + "Gemini_API_Chat_Zho", + "Gemini_API_S_Chat_Zho", + "Gemini_API_S_Vsion_ImgURL_Zho", + "Gemini_API_S_Zho", + "Gemini_API_Vsion_ImgURL_Zho", + "Gemini_API_Zho" + ], + { + "title_aux": "ComfyUI-Gemini" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID": [ + [ + "IDBaseModelLoader_fromhub", + "IDBaseModelLoader_local", + 
"IDControlNetLoader", + "IDGenerationNode", + "ID_Prompt_Styler", + "InsightFaceLoader_Zho", + "Ipadapter_instantidLoader" + ], + { + "title_aux": "ComfyUI-InstantID" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-PhotoMaker-ZHO": [ + [ + "BaseModel_Loader_fromhub", + "BaseModel_Loader_local", + "LoRALoader", + "NEW_PhotoMaker_Generation", + "PhotoMakerAdapter_Loader_fromhub", + "PhotoMakerAdapter_Loader_local", + "PhotoMaker_Generation", + "Prompt_Styler", + "Ref_Image_Preprocessing" + ], + { + "title_aux": "ComfyUI PhotoMaker (ZHO)" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Q-Align": [ + [ + "QAlign_Zho" + ], + { + "title_aux": "ComfyUI-Q-Align" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Qwen-VL-API": [ + [ + "QWenVL_API_S_Multi_Zho", + "QWenVL_API_S_Zho" + ], + { + "title_aux": "ComfyUI-Qwen-VL-API" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SVD-ZHO": [ + [ + "SVD_Aspect_Ratio_Zho", + "SVD_Steps_MotionStrength_Seed_Zho", + "SVD_Styler_Zho" + ], + { + "title_aux": "ComfyUI-SVD-ZHO (WIP)" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-SegMoE": [ + [ + "SMoE_Generation_Zho", + "SMoE_ModelLoader_Zho" + ], + { + "title_aux": "ComfyUI SegMoE" + } + ], + "https://github.com/ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite": [ + [ + "AlphaChanelAddByMask", + "ImageCompositeBy_BG_Zho", + "ImageCompositeBy_Zho", + "ImageComposite_BG_Zho", + "ImageComposite_Zho", + "RGB_Image_Zho", + "Text_Image_Frame_Zho", + "Text_Image_Multiline_Zho", + "Text_Image_Zho" + ], + { + "title_aux": "ComfyUI-Text_Image-Composite [WIP]" + } + ], + "https://github.com/ZHO-ZHO-ZHO/comfyui-portrait-master-zh-cn": [ + [ + "PortraitMaster_\u4e2d\u6587\u7248" + ], + { + "title_aux": "comfyui-portrait-master-zh-cn" + } + ], + "https://github.com/ZaneA/ComfyUI-ImageReward": [ + [ + "ImageRewardLoader", + "ImageRewardScore" + ], + { + "title_aux": "ImageReward" + } + ], + "https://github.com/Zuellni/ComfyUI-ExLlama": [ + [ + "ZuellniExLlamaGenerator", + "ZuellniExLlamaLoader", + "ZuellniTextPreview", + "ZuellniTextReplace" + ], + { + "title_aux": "ComfyUI-ExLlama" + } + ], + "https://github.com/Zuellni/ComfyUI-PickScore-Nodes": [ + [ + "ZuellniPickScoreImageProcessor", + "ZuellniPickScoreLoader", + "ZuellniPickScoreSelector", + "ZuellniPickScoreTextProcessor" + ], + { + "title_aux": "ComfyUI PickScore Nodes" + } + ], + "https://github.com/a1lazydog/ComfyUI-AudioScheduler": [ + [ + "AmplitudeToGraph", + "AmplitudeToNumber", + "AudioToAmplitudeGraph", + "AudioToFFTs", + "BatchAmplitudeSchedule", + "ClipAmplitude", + "GateNormalizedAmplitude", + "LoadAudio", + "NormalizeAmplitude", + "NormalizedAmplitudeDrivenString", + "NormalizedAmplitudeToGraph", + "NormalizedAmplitudeToNumber", + "TransientAmplitudeBasic" + ], + { + "title_aux": "ComfyUI-AudioScheduler" + } + ], + "https://github.com/abdozmantar/ComfyUI-InstaSwap": [ + [ + "InstaSwapFaceSwap", + "InstaSwapLoadFaceModel", + "InstaSwapSaveFaceModel" + ], + { + "title_aux": "InstaSwap Face Swap Node for ComfyUI" + } + ], + "https://github.com/abyz22/image_control": [ + [ + "abyz22_Convertpipe", + "abyz22_Editpipe", + "abyz22_FirstNonNull", + "abyz22_FromBasicPipe_v2", + "abyz22_Frompipe", + "abyz22_ImpactWildcardEncode", + "abyz22_ImpactWildcardEncode_GetPrompt", + "abyz22_Ksampler", + "abyz22_Padding Image", + "abyz22_RemoveControlnet", + "abyz22_SaveImage", + "abyz22_SetQueue", + "abyz22_ToBasicPipe", + "abyz22_Topipe", + "abyz22_blend_onecolor", + "abyz22_blendimages", + "abyz22_bypass", + "abyz22_drawmask", + "abyz22_lamaInpaint", + 
"abyz22_lamaPreprocessor", + "abyz22_makecircles", + "abyz22_setimageinfo", + "abyz22_smallhead" + ], + { + "title_aux": "image_control" + } + ], + "https://github.com/adbrasi/ComfyUI-TrashNodes-DownloadHuggingface": [ + [ + "DownloadLinkChecker", + "ShowFileNames" + ], + { + "title_aux": "ComfyUI-TrashNodes-DownloadHuggingface" + } + ], + "https://github.com/adieyal/comfyui-dynamicprompts": [ + [ + "DPCombinatorialGenerator", + "DPFeelingLucky", + "DPJinja", + "DPMagicPrompt", + "DPOutput", + "DPRandomGenerator" + ], + { + "title_aux": "DynamicPrompts Custom Nodes" + } + ], + "https://github.com/adriflex/ComfyUI_Blender_Texdiff": [ + [ + "ViewportColor", + "ViewportDepth" + ], + { + "title_aux": "ComfyUI_Blender_Texdiff" + } + ], + "https://github.com/aegis72/aegisflow_utility_nodes": [ + [ + "Add Text To Image", + "Aegisflow CLIP Pass", + "Aegisflow Conditioning Pass", + "Aegisflow Image Pass", + "Aegisflow Latent Pass", + "Aegisflow Mask Pass", + "Aegisflow Model Pass", + "Aegisflow Pos/Neg Pass", + "Aegisflow SDXL Tuple Pass", + "Aegisflow VAE Pass", + "Aegisflow controlnet preprocessor bus", + "Apply Instagram Filter", + "Brightness_Contrast_Ally", + "Flatten Colors", + "Gaussian Blur_Ally", + "GlitchThis Effect", + "Hue Rotation", + "Image Flip_ally", + "Placeholder Tuple", + "Swap Color Mode", + "aegisflow Multi_Pass", + "aegisflow Multi_Pass XL", + "af_pipe_in_15", + "af_pipe_in_xl", + "af_pipe_out_15", + "af_pipe_out_xl" + ], + { + "title_aux": "AegisFlow Utility Nodes" + } + ], + "https://github.com/aegis72/comfyui-styles-all": [ + [ + "menus" + ], + { + "title_aux": "ComfyUI-styles-all" + } + ], + "https://github.com/ai-liam/comfyui_liam_util": [ + [ + "LiamLoadImage" + ], + { + "title_aux": "LiamUtil" + } + ], + "https://github.com/aianimation55/ComfyUI-FatLabels": [ + [ + "FatLabels" + ], + { + "title_aux": "Comfy UI FatLabels" + } + ], + "https://github.com/alexopus/ComfyUI-Image-Saver": [ + [ + "Cfg Literal (Image Saver)", + "Checkpoint Loader with Name (Image Saver)", + "Float Literal (Image Saver)", + "Image Saver", + "Int Literal (Image Saver)", + "Sampler Selector (Image Saver)", + "Scheduler Selector (Image Saver)", + "Seed Generator (Image Saver)", + "String Literal (Image Saver)", + "Width/Height Literal (Image Saver)" + ], + { + "title_aux": "ComfyUI Image Saver" + } + ], + "https://github.com/alpertunga-bile/prompt-generator-comfyui": [ + [ + "Prompt Generator" + ], + { + "title_aux": "prompt-generator" + } + ], + "https://github.com/alsritter/asymmetric-tiling-comfyui": [ + [ + "Asymmetric_Tiling_KSampler" + ], + { + "title_aux": "asymmetric-tiling-comfyui" + } + ], + "https://github.com/alt-key-project/comfyui-dream-project": [ + [ + "Analyze Palette [Dream]", + "Beat Curve [Dream]", + "Big Float Switch [Dream]", + "Big Image Switch [Dream]", + "Big Int Switch [Dream]", + "Big Latent Switch [Dream]", + "Big Palette Switch [Dream]", + "Big Text Switch [Dream]", + "Boolean To Float [Dream]", + "Boolean To Int [Dream]", + "Build Prompt [Dream]", + "CSV Curve [Dream]", + "CSV Generator [Dream]", + "Calculation [Dream]", + "Common Frame Dimensions [Dream]", + "Compare Palettes [Dream]", + "FFMPEG Video Encoder [Dream]", + "File Count [Dream]", + "Finalize Prompt [Dream]", + "Float Input [Dream]", + "Float to Log Entry [Dream]", + "Frame Count Calculator [Dream]", + "Frame Counter (Directory) [Dream]", + "Frame Counter (Simple) [Dream]", + "Frame Counter Info [Dream]", + "Frame Counter Offset [Dream]", + "Frame Counter Time Offset [Dream]", + "Image Brightness 
Adjustment [Dream]", + "Image Color Shift [Dream]", + "Image Contrast Adjustment [Dream]", + "Image Motion [Dream]", + "Image Sequence Blend [Dream]", + "Image Sequence Loader [Dream]", + "Image Sequence Saver [Dream]", + "Image Sequence Tweening [Dream]", + "Int Input [Dream]", + "Int to Log Entry [Dream]", + "Laboratory [Dream]", + "Linear Curve [Dream]", + "Log Entry Joiner [Dream]", + "Log File [Dream]", + "Noise from Area Palettes [Dream]", + "Noise from Palette [Dream]", + "Palette Color Align [Dream]", + "Palette Color Shift [Dream]", + "Sample Image Area as Palette [Dream]", + "Sample Image as Palette [Dream]", + "Saw Curve [Dream]", + "Sine Curve [Dream]", + "Smooth Event Curve [Dream]", + "String Input [Dream]", + "String Tokenizer [Dream]", + "String to Log Entry [Dream]", + "Text Input [Dream]", + "Triangle Curve [Dream]", + "Triangle Event Curve [Dream]", + "WAV Curve [Dream]" + ], + { + "title_aux": "Dream Project Animation Nodes" + } + ], + "https://github.com/alt-key-project/comfyui-dream-video-batches": [ + [ + "Blended Transition [DVB]", + "Calculation [DVB]", + "Create Frame Set [DVB]", + "Divide [DVB]", + "Fade From Black [DVB]", + "Fade To Black [DVB]", + "Float Input [DVB]", + "For Each Done [DVB]", + "For Each Filename [DVB]", + "Frame Set Append [DVB]", + "Frame Set Frame Dimensions Scaled [DVB]", + "Frame Set Index Offset [DVB]", + "Frame Set Merger [DVB]", + "Frame Set Reindex [DVB]", + "Frame Set Repeat [DVB]", + "Frame Set Reverse [DVB]", + "Frame Set Split Beginning [DVB]", + "Frame Set Split End [DVB]", + "Frame Set Splitter [DVB]", + "Generate Inbetween Frames [DVB]", + "Int Input [DVB]", + "Linear Camera Pan [DVB]", + "Linear Camera Roll [DVB]", + "Linear Camera Zoom [DVB]", + "Load Image From Path [DVB]", + "Multiply [DVB]", + "Sine Camera Pan [DVB]", + "Sine Camera Roll [DVB]", + "Sine Camera Zoom [DVB]", + "String Input [DVB]", + "Text Input [DVB]", + "Trace Memory Allocation [DVB]", + "Unwrap Frame Set [DVB]" + ], + { + "title_aux": "Dream Video Batches" + } + ], + "https://github.com/an90ray/ComfyUI_RErouter_CustomNodes": [ + [ + "CLIPTextEncode (RE)", + "CLIPTextEncodeSDXL (RE)", + "CLIPTextEncodeSDXLRefiner (RE)", + "Int (RE)", + "RErouter <=", + "RErouter =>", + "String (RE)" + ], + { + "title_aux": "ComfyUI_RErouter_CustomNodes" + } + ], + "https://github.com/andersxa/comfyui-PromptAttention": [ + [ + "CLIPAttentionMaskEncode" + ], + { + "title_aux": "CLIP Directional Prompt Attention" + } + ], + "https://github.com/antrobot1234/antrobots-comfyUI-nodepack": [ + [ + "composite", + "crop", + "paste", + "preview_mask", + "scale" + ], + { + "title_aux": "antrobots ComfyUI Nodepack" + } + ], + "https://github.com/asagi4/ComfyUI-CADS": [ + [ + "CADS" + ], + { + "title_aux": "ComfyUI-CADS" + } + ], + "https://github.com/asagi4/comfyui-prompt-control": [ + [ + "EditableCLIPEncode", + "FilterSchedule", + "LoRAScheduler", + "PCApplySettings", + "PCPromptFromSchedule", + "PCScheduleSettings", + "PCSplitSampling", + "PromptControlSimple", + "PromptToSchedule", + "ScheduleToCond", + "ScheduleToModel" + ], + { + "title_aux": "ComfyUI prompt control" + } + ], + "https://github.com/asagi4/comfyui-utility-nodes": [ + [ + "MUForceCacheClear", + "MUJinjaRender", + "MUSimpleWildcard" + ], + { + "title_aux": "asagi4/comfyui-utility-nodes" + } + ], + "https://github.com/aszc-dev/ComfyUI-CoreMLSuite": [ + [ + "Core ML Converter", + "Core ML LCM Converter", + "Core ML LoRA Loader", + "CoreMLModelAdapter", + "CoreMLSampler", + "CoreMLSamplerAdvanced", + "CoreMLUNetLoader" + 
], + { + "title_aux": "Core ML Suite for ComfyUI" + } + ], + "https://github.com/avatechai/avatar-graph-comfyui": [ + [ + "ApplyMeshTransformAsShapeKey", + "B_ENUM", + "B_VECTOR3", + "B_VECTOR4", + "Combine Points", + "CreateShapeFlow", + "ExportBlendshapes", + "ExportGLTF", + "Extract Boundary Points", + "Image Alpha Mask Merge", + "ImageBridge", + "LoadImageFromRequest", + "LoadImageWithAlpha", + "LoadValueFromRequest", + "SAM MultiLayer", + "Save Image With Workflow" + ], + { + "author": "Avatech Limited", + "description": "Include nodes for sam + bpy operation, that allows workflow creations for generative 2d character rig.", + "nickname": "Avatar Graph", + "title": "Avatar Graph", + "title_aux": "avatar-graph-comfyui" + } + ], + "https://github.com/azure-dragon-ai/ComfyUI-ClipScore-Nodes": [ + [ + "HaojihuiClipScoreFakeImageProcessor", + "HaojihuiClipScoreImageProcessor", + "HaojihuiClipScoreImageScore", + "HaojihuiClipScoreLoader", + "HaojihuiClipScoreRealImageProcessor", + "HaojihuiClipScoreTextProcessor" + ], + { + "title_aux": "ComfyUI-ClipScore-Nodes" + } + ], + "https://github.com/badjeff/comfyui_lora_tag_loader": [ + [ + "LoraTagLoader" + ], + { + "title_aux": "LoRA Tag Loader for ComfyUI" + } + ], + "https://github.com/banodoco/steerable-motion": [ + [ + "BatchCreativeInterpolation" + ], + { + "title_aux": "Steerable Motion" + } + ], + "https://github.com/bash-j/mikey_nodes": [ + [ + "AddMetaData", + "Batch Crop Image", + "Batch Crop Resize Inplace", + "Batch Load Images", + "Batch Resize Image for SDXL", + "Checkpoint Loader Simple Mikey", + "CinematicLook", + "Empty Latent Ratio Custom SDXL", + "Empty Latent Ratio Select SDXL", + "EvalFloats", + "FaceFixerOpenCV", + "FileNamePrefix", + "FileNamePrefixDateDirFirst", + "Float to String", + "HaldCLUT", + "Image Caption", + "ImageBorder", + "ImageOverlay", + "ImagePaste", + "Int to String", + "LMStudioPrompt", + "Load Image Based on Number", + "LoraSyntaxProcessor", + "Mikey Sampler", + "Mikey Sampler Base Only", + "Mikey Sampler Base Only Advanced", + "Mikey Sampler Tiled", + "Mikey Sampler Tiled Base Only", + "MikeySamplerTiledAdvanced", + "MikeySamplerTiledAdvancedBaseOnly", + "OobaPrompt", + "PresetRatioSelector", + "Prompt With SDXL", + "Prompt With Style", + "Prompt With Style V2", + "Prompt With Style V3", + "Range Float", + "Range Integer", + "Ratio Advanced", + "Resize Image for SDXL", + "Save Image If True", + "Save Image With Prompt Data", + "Save Images Mikey", + "Save Images No Display", + "SaveMetaData", + "SearchAndReplace", + "Seed String", + "Style Conditioner", + "Style Conditioner Base Only", + "Text2InputOr3rdOption", + "TextCombinations", + "TextCombinations3", + "TextConcat", + "TextPreserve", + "Upscale Tile Calculator", + "Wildcard Processor", + "WildcardAndLoraSyntaxProcessor", + "WildcardOobaPrompt" + ], + { + "title_aux": "Mikey Nodes" + } + ], + "https://github.com/bedovyy/ComfyUI_NAIDGenerator": [ + [ + "GenerateNAID", + "Img2ImgOptionNAID", + "InpaintingOptionNAID", + "MaskImageToNAID", + "ModelOptionNAID", + "PromptToNAID" + ], + { + "title_aux": "ComfyUI_NAIDGenerator" + } + ], + "https://github.com/biegert/ComfyUI-CLIPSeg/raw/main/custom_nodes/clipseg.py": [ + [ + "CLIPSeg", + "CombineSegMasks" + ], + { + "title_aux": "CLIPSeg" + } + ], + "https://github.com/bilal-arikan/ComfyUI_TextAssets": [ + [ + "LoadTextAsset" + ], + { + "title_aux": "ComfyUI_TextAssets" + } + ], + "https://github.com/blepping/ComfyUI-bleh": [ + [ + "BlehDeepShrink", + "BlehDiscardPenultimateSigma", + "BlehForceSeedSampler", 
+ "BlehHyperTile", + "BlehInsaneChainSampler", + "BlehModelPatchConditional" + ], + { + "title_aux": "ComfyUI-bleh" + } + ], + "https://github.com/blepping/ComfyUI-sonar": [ + [ + "NoisyLatentLike", + "SamplerSonarDPMPPSDE", + "SamplerSonarEuler", + "SamplerSonarEulerA", + "SonarCustomNoise", + "SonarGuidanceConfig" + ], + { + "title_aux": "ComfyUI-sonar" + } + ], + "https://github.com/bmad4ever/comfyui_ab_samplercustom": [ + [ + "AB SamplerCustom (experimental)" + ], + { + "title_aux": "comfyui_ab_sampler" + } + ], + "https://github.com/bmad4ever/comfyui_bmad_nodes": [ + [ + "AdaptiveThresholding", + "Add String To Many", + "AddAlpha", + "AdjustRect", + "AnyToAny", + "BoundingRect (contours)", + "BuildColorRangeAdvanced (hsv)", + "BuildColorRangeHSV (hsv)", + "CLAHE", + "CLIPEncodeMultiple", + "CLIPEncodeMultipleAdvanced", + "ChameleonMask", + "CheckpointLoader (dirty)", + "CheckpointLoaderSimple (dirty)", + "Color (RGB)", + "Color (hexadecimal)", + "Color Clip", + "Color Clip (advanced)", + "Color Clip ADE20k", + "ColorDictionary", + "ColorDictionary (custom)", + "Conditioning (combine multiple)", + "Conditioning (combine selective)", + "Conditioning Grid (cond)", + "Conditioning Grid (string)", + "Conditioning Grid (string) Advanced", + "Contour To Mask", + "Contours", + "ControlNetHadamard", + "ControlNetHadamard (manual)", + "ConvertImg", + "CopyMakeBorder", + "CreateRequestMetadata", + "DistanceTransform", + "Draw Contour(s)", + "EqualizeHistogram", + "ExtendColorList", + "ExtendCondList", + "ExtendFloatList", + "ExtendImageList", + "ExtendIntList", + "ExtendLatentList", + "ExtendMaskList", + "ExtendModelList", + "ExtendStringList", + "FadeMaskEdges", + "Filter Contour", + "FindComplementaryColor", + "FindThreshold", + "FlatLatentsIntoSingleGrid", + "Framed Mask Grab Cut", + "Framed Mask Grab Cut 2", + "FromListGet1Color", + "FromListGet1Cond", + "FromListGet1Float", + "FromListGet1Image", + "FromListGet1Int", + "FromListGet1Latent", + "FromListGet1Mask", + "FromListGet1Model", + "FromListGet1String", + "FromListGetColors", + "FromListGetConds", + "FromListGetFloats", + "FromListGetImages", + "FromListGetInts", + "FromListGetLatents", + "FromListGetMasks", + "FromListGetModels", + "FromListGetStrings", + "Get Contour from list", + "Get Models", + "Get Prompt", + "HypernetworkLoader (dirty)", + "ImageBatchToList", + "InRange (hsv)", + "Inpaint", + "Input/String to Int Array", + "KMeansColor", + "Load 64 Encoded Image", + "LoraLoader (dirty)", + "MaskGrid N KSamplers Advanced", + "MaskOuterBlur", + "Merge Latent Batch Gridwise", + "MonoMerge", + "MorphologicOperation", + "MorphologicSkeletoning", + "NaiveAutoKMeansColor", + "OtsuThreshold", + "RGB to HSV", + "Rect Grab Cut", + "Remap", + "RemapBarrelDistortion", + "RemapFromInsideParabolas", + "RemapFromQuadrilateral (homography)", + "RemapInsideParabolas", + "RemapInsideParabolasAdvanced", + "RemapPinch", + "RemapReverseBarrelDistortion", + "RemapStretch", + "RemapToInnerCylinder", + "RemapToOuterCylinder", + "RemapToQuadrilateral", + "RemapWarpPolar", + "Repeat Into Grid (image)", + "Repeat Into Grid (latent)", + "RequestInputs", + "SampleColorHSV", + "Save Image (api)", + "SeamlessClone", + "SeamlessClone (simple)", + "SetRequestStateToComplete", + "String", + "String to Float", + "String to Integer", + "ToColorList", + "ToCondList", + "ToFloatList", + "ToImageList", + "ToIntList", + "ToLatentList", + "ToMaskList", + "ToModelList", + "ToStringList", + "UnGridify (image)", + "VAEEncodeBatch" + ], + { + "title_aux": "Bmad Nodes" + } + 
], + "https://github.com/bmad4ever/comfyui_lists_cartesian_product": [ + [ + "AnyListCartesianProduct" + ], + { + "title_aux": "Lists Cartesian Product" + } + ], + "https://github.com/bradsec/ComfyUI_ResolutionSelector": [ + [ + "ResolutionSelector" + ], + { + "title_aux": "ResolutionSelector for ComfyUI" + } + ], + "https://github.com/braintacles/braintacles-comfyui-nodes": [ + [ + "CLIPTextEncodeSDXL-Multi-IO", + "CLIPTextEncodeSDXL-Pipe", + "Empty Latent Image from Aspect-Ratio", + "Random Find and Replace", + "VAE Decode Pipe", + "VAE Decode Tiled Pipe", + "VAE Encode Pipe", + "VAE Encode Tiled Pipe" + ], + { + "title_aux": "braintacles-nodes" + } + ], + "https://github.com/brianfitzgerald/style_aligned_comfy": [ + [ + "StyleAlignedBatchAlign", + "StyleAlignedReferenceSampler", + "StyleAlignedSampleReferenceLatents" + ], + { + "title_aux": "StyleAligned for ComfyUI" + } + ], + "https://github.com/bronkula/comfyui-fitsize": [ + [ + "FS: Crop Image Into Even Pieces", + "FS: Fit Image And Resize", + "FS: Fit Size From Image", + "FS: Fit Size From Int", + "FS: Image Region To Mask", + "FS: Load Image And Resize To Fit", + "FS: Pick Image From Batch", + "FS: Pick Image From Batches", + "FS: Pick Image From List" + ], + { + "title_aux": "comfyui-fitsize" + } + ], + "https://github.com/bruefire/ComfyUI-SeqImageLoader": [ + [ + "VFrame Loader With Mask Editor", + "Video Loader With Mask Editor" + ], + { + "title_aux": "ComfyUI Sequential Image Loader" + } + ], + "https://github.com/budihartono/comfyui_otonx_nodes": [ + [ + "OTX Integer Multiple Inputs 4", + "OTX Integer Multiple Inputs 5", + "OTX Integer Multiple Inputs 6", + "OTX KSampler Feeder", + "OTX Versatile Multiple Inputs 4", + "OTX Versatile Multiple Inputs 5", + "OTX Versatile Multiple Inputs 6" + ], + { + "title_aux": "Otonx's Custom Nodes" + } + ], + "https://github.com/bvhari/ComfyUI_ImageProcessing": [ + [ + "BilateralFilter", + "Brightness", + "Gamma", + "Hue", + "Saturation", + "SigmoidCorrection", + "UnsharpMask" + ], + { + "title_aux": "ImageProcessing" + } + ], + "https://github.com/bvhari/ComfyUI_LatentToRGB": [ + [ + "LatentToRGB" + ], + { + "title_aux": "LatentToRGB" + } + ], + "https://github.com/bvhari/ComfyUI_PerpWeight": [ + [ + "CLIPTextEncodePerpWeight" + ], + { + "title_aux": "ComfyUI_PerpWeight" + } + ], + "https://github.com/catscandrive/comfyui-imagesubfolders/raw/main/loadImageWithSubfolders.py": [ + [ + "LoadImagewithSubfolders" + ], + { + "title_aux": "Image loader with subfolders" + } + ], + "https://github.com/celsojr2013/comfyui_simpletools/raw/main/google_translator.py": [ + [ + "GoogleTranslator" + ], + { + "title_aux": "ComfyUI SimpleTools Suit" + } + ], + "https://github.com/ceruleandeep/ComfyUI-LLaVA-Captioner": [ + [ + "LlavaCaptioner" + ], + { + "title_aux": "ComfyUI LLaVA Captioner" + } + ], + "https://github.com/chaojie/ComfyUI-DragNUWA": [ + [ + "BrushMotion", + "CompositeMotionBrush", + "CompositeMotionBrushWithoutModel", + "DragNUWA Run", + "DragNUWA Run MotionBrush", + "Get First Image", + "Get Last Image", + "InstantCameraMotionBrush", + "InstantObjectMotionBrush", + "Load CheckPoint DragNUWA", + "Load MotionBrush From Optical Flow", + "Load MotionBrush From Optical Flow Directory", + "Load MotionBrush From Optical Flow Without Model", + "Load MotionBrush From Tracking Points", + "Load MotionBrush From Tracking Points Without Model", + "Load Pose KeyPoints", + "Loop", + "LoopEnd_IMAGE", + "LoopStart_IMAGE", + "Split Tracking Points" + ], + { + "title_aux": "ComfyUI-DragNUWA" + } + ], + 
"https://github.com/chaojie/ComfyUI-DynamiCrafter": [ + [ + "DynamiCrafter Simple", + "DynamiCrafterLoader" + ], + { + "title_aux": "ComfyUI-DynamiCrafter" + } + ], + "https://github.com/chaojie/ComfyUI-I2VGEN-XL": [ + [ + "I2VGEN-XL Simple", + "Modelscope Pipeline Loader" + ], + { + "title_aux": "ComfyUI-I2VGEN-XL" + } + ], + "https://github.com/chaojie/ComfyUI-LightGlue": [ + [ + "LightGlue Loader", + "LightGlue Simple", + "LightGlue Simple Multi" + ], + { + "title_aux": "ComfyUI-LightGlue" + } + ], + "https://github.com/chaojie/ComfyUI-Moore-AnimateAnyone": [ + [ + "Moore-AnimateAnyone Denoising Unet", + "Moore-AnimateAnyone Image Encoder", + "Moore-AnimateAnyone Pipeline Loader", + "Moore-AnimateAnyone Pose Guider", + "Moore-AnimateAnyone Reference Unet", + "Moore-AnimateAnyone Simple", + "Moore-AnimateAnyone VAE" + ], + { + "title_aux": "ComfyUI-Moore-AnimateAnyone" + } + ], + "https://github.com/chaojie/ComfyUI-Motion-Vector-Extractor": [ + [ + "Motion Vector Extractor", + "VideoCombineThenPath" + ], + { + "title_aux": "ComfyUI-Motion-Vector-Extractor" + } + ], + "https://github.com/chaojie/ComfyUI-MotionCtrl": [ + [ + "Load Motion Camera Preset", + "Load Motion Traj Preset", + "Load Motionctrl Checkpoint", + "Motionctrl Cond", + "Motionctrl Sample", + "Motionctrl Sample Simple", + "Select Image Indices" + ], + { + "title_aux": "ComfyUI-MotionCtrl" + } + ], + "https://github.com/chaojie/ComfyUI-MotionCtrl-SVD": [ + [ + "Load Motionctrl-SVD Camera Preset", + "Load Motionctrl-SVD Checkpoint", + "Motionctrl-SVD Sample Simple" + ], + { + "title_aux": "ComfyUI-MotionCtrl-SVD" + } + ], + "https://github.com/chaojie/ComfyUI-Panda3d": [ + [ + "Panda3dAmbientLight", + "Panda3dAttachNewNode", + "Panda3dBase", + "Panda3dDirectionalLight", + "Panda3dLoadDepthModel", + "Panda3dLoadModel", + "Panda3dLoadTexture", + "Panda3dModelMerge", + "Panda3dTest", + "Panda3dTextureMerge" + ], + { + "title_aux": "ComfyUI-Panda3d" + } + ], + "https://github.com/chaojie/ComfyUI-Pymunk": [ + [ + "PygameRun", + "PygameSurface", + "PymunkDynamicBox", + "PymunkDynamicCircle", + "PymunkRun", + "PymunkShapeMerge", + "PymunkSpace", + "PymunkStaticLine" + ], + { + "title_aux": "ComfyUI-Pymunk" + } + ], + "https://github.com/chaojie/ComfyUI-RAFT": [ + [ + "Load MotionBrush", + "RAFT Run", + "Save MotionBrush", + "VizMotionBrush" + ], + { + "title_aux": "ComfyUI-RAFT" + } + ], + "https://github.com/chflame163/ComfyUI_LayerStyle": [ + [ + "LayerColor: Brightness & Contrast", + "LayerColor: ColorAdapter", + "LayerColor: Exposure", + "LayerColor: Gamma", + "LayerColor: HSV", + "LayerColor: LAB", + "LayerColor: LUT Apply", + "LayerColor: RGB", + "LayerColor: YUV", + "LayerFilter: ChannelShake", + "LayerFilter: ColorMap", + "LayerFilter: GaussianBlur", + "LayerFilter: MotionBlur", + "LayerFilter: Sharp & Soft", + "LayerFilter: SkinBeauty", + "LayerFilter: SoftLight", + "LayerFilter: WaterColor", + "LayerMask: CreateGradientMask", + "LayerMask: MaskBoxDetect", + "LayerMask: MaskByDifferent", + "LayerMask: MaskEdgeShrink", + "LayerMask: MaskEdgeUltraDetail", + "LayerMask: MaskGradient", + "LayerMask: MaskGrow", + "LayerMask: MaskInvert", + "LayerMask: MaskMotionBlur", + "LayerMask: MaskPreview", + "LayerMask: MaskStroke", + "LayerMask: PixelSpread", + "LayerMask: RemBgUltra", + "LayerMask: SegmentAnythingUltra", + "LayerStyle: ColorOverlay", + "LayerStyle: DropShadow", + "LayerStyle: GradientOverlay", + "LayerStyle: InnerGlow", + "LayerStyle: InnerShadow", + "LayerStyle: OuterGlow", + "LayerStyle: Stroke", + "LayerUtility: 
ColorImage", + "LayerUtility: ColorPicker", + "LayerUtility: CropByMask", + "LayerUtility: ExtendCanvas", + "LayerUtility: GetColorTone", + "LayerUtility: GetImageSize", + "LayerUtility: GradientImage", + "LayerUtility: ImageBlend", + "LayerUtility: ImageBlendAdvance", + "LayerUtility: ImageChannelMerge", + "LayerUtility: ImageChannelSplit", + "LayerUtility: ImageMaskScaleAs", + "LayerUtility: ImageOpacity", + "LayerUtility: ImageScaleRestore", + "LayerUtility: ImageShift", + "LayerUtility: LayerImageTransform", + "LayerUtility: LayerMaskTransform", + "LayerUtility: PrintInfo", + "LayerUtility: RestoreCropBox", + "LayerUtility: TextImage", + "LayerUtility: XY to Percent" + ], + { + "title_aux": "ComfyUI Layer Style" + } + ], + "https://github.com/chflame163/ComfyUI_MSSpeech_TTS": [ + [ + "Input Trigger", + "MicrosoftSpeech_TTS", + "Play Sound", + "Play Sound (loop)" + ], + { + "title_aux": "ComfyUI_MSSpeech_TTS" + } + ], + "https://github.com/chflame163/ComfyUI_WordCloud": [ + [ + "ComfyWordCloud", + "LoadTextFile", + "RGB_Picker" + ], + { + "title_aux": "ComfyUI_WordCloud" + } + ], + "https://github.com/chibiace/ComfyUI-Chibi-Nodes": [ + [ + "ConditionText", + "ConditionTextMulti", + "ImageAddText", + "ImageSimpleResize", + "ImageSizeInfo", + "ImageTool", + "Int2String", + "LoadEmbedding", + "LoadImageExtended", + "Loader", + "Prompts", + "RandomResolutionLatent", + "SaveImages", + "SeedGenerator", + "SimpleSampler", + "TextSplit", + "Textbox", + "Wildcards" + ], + { + "title_aux": "ComfyUI-Chibi-Nodes" + } + ], + "https://github.com/chrisgoringe/cg-image-picker": [ + [ + "Preview Chooser", + "Preview Chooser Fabric" + ], + { + "author": "chrisgoringe", + "description": "Custom nodes that preview images and pause the workflow to allow the user to select one or more to progress", + "nickname": "Image Chooser", + "title": "Image Chooser", + "title_aux": "Image chooser" + } + ], + "https://github.com/chrisgoringe/cg-noise": [ + [ + "Hijack", + "KSampler Advanced with Variations", + "KSampler with Variations", + "UnHijack" + ], + { + "title_aux": "Variation seeds" + } + ], + "https://github.com/chrisgoringe/cg-use-everywhere": [ + [ + "Seed Everywhere" + ], + { + "nodename_pattern": "(^(Prompts|Anything) Everywhere|Simple String)", + "title_aux": "Use Everywhere (UE Nodes)" + } + ], + "https://github.com/city96/ComfyUI_ColorMod": [ + [ + "ColorModEdges", + "ColorModPivot", + "LoadImageHighPrec", + "PreviewImageHighPrec", + "SaveImageHighPrec" + ], + { + "title_aux": "ComfyUI_ColorMod" + } + ], + "https://github.com/city96/ComfyUI_DiT": [ + [ + "DiTCheckpointLoader", + "DiTCheckpointLoaderSimple", + "DiTLabelCombine", + "DiTLabelSelect", + "DiTSampler" + ], + { + "title_aux": "ComfyUI_DiT [WIP]" + } + ], + "https://github.com/city96/ComfyUI_ExtraModels": [ + [ + "DiTCondLabelEmpty", + "DiTCondLabelSelect", + "DitCheckpointLoader", + "ExtraVAELoader", + "PixArtCheckpointLoader", + "PixArtDPMSampler", + "PixArtLoraLoader", + "PixArtResolutionSelect", + "PixArtT5TextEncode", + "T5TextEncode", + "T5v11Loader" + ], + { + "title_aux": "Extra Models for ComfyUI" + } + ], + "https://github.com/city96/ComfyUI_NetDist": [ + [ + "CombineImageBatch", + "FetchRemote", + "LoadCurrentWorkflowJSON", + "LoadDiskWorkflowJSON", + "LoadImageUrl", + "LoadLatentNumpy", + "LoadLatentUrl", + "RemoteChainEnd", + "RemoteChainStart", + "RemoteQueueSimple", + "RemoteQueueWorker", + "SaveDiskWorkflowJSON", + "SaveImageUrl", + "SaveLatentNumpy" + ], + { + "title_aux": "ComfyUI_NetDist" + } + ], + 
"https://github.com/city96/SD-Advanced-Noise": [ + [ + "LatentGaussianNoise", + "MathEncode" + ], + { + "title_aux": "SD-Advanced-Noise" + } + ], + "https://github.com/city96/SD-Latent-Interposer": [ + [ + "LatentInterposer" + ], + { + "title_aux": "Latent-Interposer" + } + ], + "https://github.com/city96/SD-Latent-Upscaler": [ + [ + "LatentUpscaler" + ], + { + "title_aux": "SD-Latent-Upscaler" + } + ], + "https://github.com/civitai/comfy-nodes": [ + [ + "CivitAI_Checkpoint_Loader", + "CivitAI_Lora_Loader" + ], + { + "title_aux": "comfy-nodes" + } + ], + "https://github.com/comfyanonymous/ComfyUI": [ + [ + "BasicScheduler", + "CLIPLoader", + "CLIPMergeSimple", + "CLIPSave", + "CLIPSetLastLayer", + "CLIPTextEncode", + "CLIPTextEncodeControlnet", + "CLIPTextEncodeSDXL", + "CLIPTextEncodeSDXLRefiner", + "CLIPVisionEncode", + "CLIPVisionLoader", + "Canny", + "CheckpointLoader", + "CheckpointLoaderSimple", + "CheckpointSave", + "ConditioningAverage", + "ConditioningCombine", + "ConditioningConcat", + "ConditioningSetArea", + "ConditioningSetAreaPercentage", + "ConditioningSetAreaStrength", + "ConditioningSetMask", + "ConditioningSetTimestepRange", + "ConditioningZeroOut", + "ControlNetApply", + "ControlNetApplyAdvanced", + "ControlNetLoader", + "CropMask", + "DiffControlNetLoader", + "DiffusersLoader", + "DualCLIPLoader", + "EmptyImage", + "EmptyLatentImage", + "ExponentialScheduler", + "FeatherMask", + "FlipSigmas", + "FreeU", + "FreeU_V2", + "GLIGENLoader", + "GLIGENTextBoxApply", + "GrowMask", + "HyperTile", + "HypernetworkLoader", + "ImageBatch", + "ImageBlend", + "ImageBlur", + "ImageColorToMask", + "ImageCompositeMasked", + "ImageCrop", + "ImageFromBatch", + "ImageInvert", + "ImageOnlyCheckpointLoader", + "ImageOnlyCheckpointSave", + "ImagePadForOutpaint", + "ImageQuantize", + "ImageScale", + "ImageScaleBy", + "ImageScaleToTotalPixels", + "ImageSharpen", + "ImageToMask", + "ImageUpscaleWithModel", + "InpaintModelConditioning", + "InvertMask", + "JoinImageWithAlpha", + "KSampler", + "KSamplerAdvanced", + "KSamplerSelect", + "KarrasScheduler", + "LatentAdd", + "LatentBatch", + "LatentBatchSeedBehavior", + "LatentBlend", + "LatentComposite", + "LatentCompositeMasked", + "LatentCrop", + "LatentFlip", + "LatentFromBatch", + "LatentInterpolate", + "LatentMultiply", + "LatentRotate", + "LatentSubtract", + "LatentUpscale", + "LatentUpscaleBy", + "LoadImage", + "LoadImageMask", + "LoadLatent", + "LoraLoader", + "LoraLoaderModelOnly", + "MaskComposite", + "MaskToImage", + "ModelMergeAdd", + "ModelMergeBlocks", + "ModelMergeSimple", + "ModelMergeSubtract", + "ModelSamplingContinuousEDM", + "ModelSamplingDiscrete", + "ModelSamplingStableCascade", + "PatchModelAddDownscale", + "PerpNeg", + "PhotoMakerEncode", + "PhotoMakerLoader", + "PolyexponentialScheduler", + "PorterDuffImageComposite", + "PreviewImage", + "RebatchImages", + "RebatchLatents", + "RepeatImageBatch", + "RepeatLatentBatch", + "RescaleCFG", + "SDTurboScheduler", + "SD_4XUpscale_Conditioning", + "SVD_img2vid_Conditioning", + "SamplerCustom", + "SamplerDPMPP_2M_SDE", + "SamplerDPMPP_SDE", + "SaveAnimatedPNG", + "SaveAnimatedWEBP", + "SaveImage", + "SaveLatent", + "SelfAttentionGuidance", + "SetLatentNoiseMask", + "SolidMask", + "SplitImageWithAlpha", + "SplitSigmas", + "StableCascade_EmptyLatentImage", + "StableCascade_StageB_Conditioning", + "StableZero123_Conditioning", + "StableZero123_Conditioning_Batched", + "StyleModelApply", + "StyleModelLoader", + "TomePatchModel", + "UNETLoader", + "UpscaleModelLoader", + "VAEDecode", + 
"VAEDecodeTiled", + "VAEEncode", + "VAEEncodeForInpaint", + "VAEEncodeTiled", + "VAELoader", + "VAESave", + "VPScheduler", + "VideoLinearCFGGuidance", + "unCLIPCheckpointLoader", + "unCLIPConditioning" + ], + { + "title_aux": "ComfyUI" + } + ], + "https://github.com/comfyanonymous/ComfyUI_experiments": [ + [ + "ModelMergeBlockNumber", + "ModelMergeSDXL", + "ModelMergeSDXLDetailedTransformers", + "ModelMergeSDXLTransformers", + "ModelSamplerTonemapNoiseTest", + "ReferenceOnlySimple", + "RescaleClassifierFreeGuidanceTest", + "TonemapNoiseWithRescaleCFG" + ], + { + "title_aux": "ComfyUI_experiments" + } + ], + "https://github.com/concarne000/ConCarneNode": [ + [ + "BingImageGrabber", + "Zephyr" + ], + { + "title_aux": "ConCarneNode" + } + ], + "https://github.com/coreyryanhanson/ComfyQR": [ + [ + "comfy-qr-by-image-size", + "comfy-qr-by-module-size", + "comfy-qr-by-module-split", + "comfy-qr-mask_errors" + ], + { + "title_aux": "ComfyQR" + } + ], + "https://github.com/coreyryanhanson/ComfyQR-scanning-nodes": [ + [ + "comfy-qr-read", + "comfy-qr-validate" + ], + { + "title_aux": "ComfyQR-scanning-nodes" + } + ], + "https://github.com/cubiq/ComfyUI_IPAdapter_plus": [ + [ + "IPAdapterApply", + "IPAdapterApplyEncoded", + "IPAdapterApplyFaceID", + "IPAdapterBatchEmbeds", + "IPAdapterEncoder", + "IPAdapterLoadEmbeds", + "IPAdapterModelLoader", + "IPAdapterSaveEmbeds", + "IPAdapterTilesMasked", + "InsightFaceLoader", + "PrepImageForClipVision", + "PrepImageForInsightFace" + ], + { + "title_aux": "ComfyUI_IPAdapter_plus" + } + ], + "https://github.com/cubiq/ComfyUI_InstantID": [ + [ + "ApplyInstantID", + "FaceKeypointsPreprocessor", + "InstantIDFaceAnalysis", + "InstantIDModelLoader" + ], + { + "title_aux": "ComfyUI InstantID (Native Support)" + } + ], + "https://github.com/cubiq/ComfyUI_SimpleMath": [ + [ + "SimpleMath", + "SimpleMathDebug" + ], + { + "title_aux": "Simple Math" + } + ], + "https://github.com/cubiq/ComfyUI_essentials": [ + [ + "BatchCount+", + "CLIPTextEncodeSDXL+", + "ConsoleDebug+", + "DebugTensorShape+", + "DrawText+", + "ExtractKeyframes+", + "GetImageSize+", + "ImageApplyLUT+", + "ImageCASharpening+", + "ImageCompositeFromMaskBatch+", + "ImageCrop+", + "ImageDesaturate+", + "ImageEnhanceDifference+", + "ImageExpandBatch+", + "ImageFlip+", + "ImageFromBatch+", + "ImagePosterize+", + "ImageRemoveBackground+", + "ImageResize+", + "ImageSeamCarving+", + "KSamplerVariationsStochastic+", + "KSamplerVariationsWithNoise+", + "MaskBatch+", + "MaskBlur+", + "MaskExpandBatch+", + "MaskFlip+", + "MaskFromBatch+", + "MaskFromColor+", + "MaskPreview+", + "ModelCompile+", + "NoiseFromImage~", + "RemBGSession+", + "RemoveLatentMask+", + "SDXLEmptyLatentSizePicker+", + "SimpleMath+", + "TransitionMask+" + ], + { + "title_aux": "ComfyUI Essentials" + } + ], + "https://github.com/dagthomas/comfyui_dagthomas": [ + [ + "CSL", + "CSVPromptGenerator", + "PromptGenerator" + ], + { + "title_aux": "SDXL Auto Prompter" + } + ], + "https://github.com/daniel-lewis-ab/ComfyUI-Llama": [ + [ + "Call LLM Advanced", + "Call LLM Basic", + "LLM_Create_Completion Advanced", + "LLM_Detokenize", + "LLM_Embed", + "LLM_Eval", + "LLM_Load_State", + "LLM_Reset", + "LLM_Sample", + "LLM_Save_State", + "LLM_Token_BOS", + "LLM_Token_EOS", + "LLM_Tokenize", + "Load LLM Model Advanced", + "Load LLM Model Basic" + ], + { + "title_aux": "ComfyUI-Llama" + } + ], + "https://github.com/daniel-lewis-ab/ComfyUI-TTS": [ + [ + "Load_Piper_Model", + "Piper_Speak_Text" + ], + { + "title_aux": "ComfyUI-TTS" + } + ], + 
"https://github.com/darkpixel/darkprompts": [ + [ + "DarkCombine", + "DarkFaceIndexShuffle", + "DarkLoRALoader", + "DarkPrompt" + ], + { + "title_aux": "DarkPrompts" + } + ], + "https://github.com/davask/ComfyUI-MarasIT-Nodes": [ + [ + "MarasitBusNode", + "MarasitBusPipeNode", + "MarasitPipeNodeBasic", + "MarasitUniversalBusNode" + ], + { + "title_aux": "MarasIT Nodes" + } + ], + "https://github.com/dave-palt/comfyui_DSP_imagehelpers": [ + [ + "dsp-imagehelpers-concat" + ], + { + "title_aux": "comfyui_DSP_imagehelpers" + } + ], + "https://github.com/dawangraoming/ComfyUI_ksampler_gpu/raw/main/ksampler_gpu.py": [ + [ + "KSamplerAdvancedGPU", + "KSamplerGPU" + ], + { + "title_aux": "KSampler GPU" + } + ], + "https://github.com/daxthin/DZ-FaceDetailer": [ + [ + "DZ_Face_Detailer" + ], + { + "title_aux": "DZ-FaceDetailer" + } + ], + "https://github.com/deroberon/StableZero123-comfyui": [ + [ + "SDZero ImageSplit", + "Stablezero123", + "Stablezero123WithDepth" + ], + { + "title_aux": "StableZero123-comfyui" + } + ], + "https://github.com/deroberon/demofusion-comfyui": [ + [ + "Batch Unsampler", + "Demofusion", + "Demofusion From Single File", + "Iterative Mixing KSampler" + ], + { + "title_aux": "demofusion-comfyui" + } + ], + "https://github.com/dfl/comfyui-clip-with-break": [ + [ + "AdvancedCLIPTextEncodeWithBreak", + "CLIPTextEncodeWithBreak" + ], + { + "author": "dfl", + "description": "CLIP text encoder that does BREAK prompting like A1111", + "nickname": "CLIP with BREAK", + "title": "CLIP with BREAK syntax", + "title_aux": "comfyui-clip-with-break" + } + ], + "https://github.com/digitaljohn/comfyui-propost": [ + [ + "ProPostApplyLUT", + "ProPostDepthMapBlur", + "ProPostFilmGrain", + "ProPostRadialBlur", + "ProPostVignette" + ], + { + "title_aux": "ComfyUI-ProPost" + } + ], + "https://github.com/dimtoneff/ComfyUI-PixelArt-Detector": [ + [ + "PixelArtAddDitherPattern", + "PixelArtDetectorConverter", + "PixelArtDetectorSave", + "PixelArtDetectorToImage", + "PixelArtLoadPalettes" + ], + { + "title_aux": "ComfyUI PixelArt Detector" + } + ], + "https://github.com/diontimmer/ComfyUI-Vextra-Nodes": [ + [ + "Add Text To Image", + "Apply Instagram Filter", + "Create Solid Color", + "Flatten Colors", + "Generate Noise Image", + "GlitchThis Effect", + "Hue Rotation", + "Load Picture Index", + "Pixel Sort", + "Play Sound At Execution", + "Prettify Prompt Using distilgpt2", + "Swap Color Mode" + ], + { + "title_aux": "ComfyUI-Vextra-Nodes" + } + ], + "https://github.com/djbielejeski/a-person-mask-generator": [ + [ + "APersonMaskGenerator" + ], + { + "title_aux": "a-person-mask-generator" + } + ], + "https://github.com/dmarx/ComfyUI-AudioReactive": [ + [ + "OpAbs", + "OpBandpass", + "OpClamp", + "OpHarmonic", + "OpModulo", + "OpNormalize", + "OpNovelty", + "OpPercussive", + "OpPow", + "OpPow2", + "OpPredominant_pulse", + "OpQuantize", + "OpRms", + "OpSmoosh", + "OpSmooth", + "OpSqrt", + "OpStretch", + "OpSustain", + "OpThreshold" + ], + { + "title_aux": "ComfyUI-AudioReactive" + } + ], + "https://github.com/dmarx/ComfyUI-Keyframed": [ + [ + "Example", + "KfAddCurveToPGroup", + "KfAddCurveToPGroupx10", + "KfApplyCurveToCond", + "KfConditioningAdd", + "KfConditioningAddx10", + "KfCurveConstant", + "KfCurveDraw", + "KfCurveFromString", + "KfCurveFromYAML", + "KfCurveInverse", + "KfCurveToAcnLatentKeyframe", + "KfCurvesAdd", + "KfCurvesAddx10", + "KfCurvesDivide", + "KfCurvesMultiply", + "KfCurvesMultiplyx10", + "KfCurvesSubtract", + "KfDebug_Clip", + "KfDebug_Cond", + "KfDebug_Curve", + "KfDebug_Float", + 
"KfDebug_Image", + "KfDebug_Int", + "KfDebug_Latent", + "KfDebug_Model", + "KfDebug_Passthrough", + "KfDebug_Segs", + "KfDebug_String", + "KfDebug_Vae", + "KfDrawSchedule", + "KfEvaluateCurveAtT", + "KfGetCurveFromPGroup", + "KfGetScheduleConditionAtTime", + "KfGetScheduleConditionSlice", + "KfKeyframedCondition", + "KfKeyframedConditionWithText", + "KfPGroupCurveAdd", + "KfPGroupCurveMultiply", + "KfPGroupDraw", + "KfPGroupProd", + "KfPGroupSum", + "KfSetCurveLabel", + "KfSetKeyframe", + "KfSinusoidalAdjustAmplitude", + "KfSinusoidalAdjustFrequency", + "KfSinusoidalAdjustPhase", + "KfSinusoidalAdjustWavelength", + "KfSinusoidalEntangledZeroOneFromFrequencyx2", + "KfSinusoidalEntangledZeroOneFromFrequencyx3", + "KfSinusoidalEntangledZeroOneFromFrequencyx4", + "KfSinusoidalEntangledZeroOneFromFrequencyx5", + "KfSinusoidalEntangledZeroOneFromFrequencyx6", + "KfSinusoidalEntangledZeroOneFromFrequencyx7", + "KfSinusoidalEntangledZeroOneFromFrequencyx8", + "KfSinusoidalEntangledZeroOneFromFrequencyx9", + "KfSinusoidalEntangledZeroOneFromWavelengthx2", + "KfSinusoidalEntangledZeroOneFromWavelengthx3", + "KfSinusoidalEntangledZeroOneFromWavelengthx4", + "KfSinusoidalEntangledZeroOneFromWavelengthx5", + "KfSinusoidalEntangledZeroOneFromWavelengthx6", + "KfSinusoidalEntangledZeroOneFromWavelengthx7", + "KfSinusoidalEntangledZeroOneFromWavelengthx8", + "KfSinusoidalEntangledZeroOneFromWavelengthx9", + "KfSinusoidalGetAmplitude", + "KfSinusoidalGetFrequency", + "KfSinusoidalGetPhase", + "KfSinusoidalGetWavelength", + "KfSinusoidalWithFrequency", + "KfSinusoidalWithWavelength" + ], + { + "title_aux": "ComfyUI-Keyframed" + } + ], + "https://github.com/drago87/ComfyUI_Dragos_Nodes": [ + [ + "file_padding", + "image_info", + "lora_loader", + "vae_loader" + ], + { + "title_aux": "ComfyUI_Dragos_Nodes" + } + ], + "https://github.com/drustan-hawk/primitive-types": [ + [ + "float", + "int", + "string", + "string_multiline" + ], + { + "title_aux": "primitive-types" + } + ], + "https://github.com/ealkanat/comfyui_easy_padding": [ + [ + "comfyui-easy-padding" + ], + { + "title_aux": "ComfyUI Easy Padding" + } + ], + "https://github.com/edenartlab/eden_comfy_pipelines": [ + [ + "CLIP_Interrogator", + "Eden_Bool", + "Eden_Compare", + "Eden_DebugPrint", + "Eden_Float", + "Eden_Int", + "Eden_String", + "Filepicker", + "IMG_blender", + "IMG_padder", + "IMG_scaler", + "IMG_unpadder", + "If ANY execute A else B", + "LatentTypeConversion", + "SaveImageAdvanced", + "VAEDecode_to_folder" + ], + { + "title_aux": "eden_comfy_pipelines" + } + ], + "https://github.com/evanspearman/ComfyMath": [ + [ + "CM_BoolBinaryOperation", + "CM_BoolToInt", + "CM_BoolUnaryOperation", + "CM_BreakoutVec2", + "CM_BreakoutVec3", + "CM_BreakoutVec4", + "CM_ComposeVec2", + "CM_ComposeVec3", + "CM_ComposeVec4", + "CM_FloatBinaryCondition", + "CM_FloatBinaryOperation", + "CM_FloatToInt", + "CM_FloatToNumber", + "CM_FloatUnaryCondition", + "CM_FloatUnaryOperation", + "CM_IntBinaryCondition", + "CM_IntBinaryOperation", + "CM_IntToBool", + "CM_IntToFloat", + "CM_IntToNumber", + "CM_IntUnaryCondition", + "CM_IntUnaryOperation", + "CM_NearestSDXLResolution", + "CM_NumberBinaryCondition", + "CM_NumberBinaryOperation", + "CM_NumberToFloat", + "CM_NumberToInt", + "CM_NumberUnaryCondition", + "CM_NumberUnaryOperation", + "CM_SDXLResolution", + "CM_Vec2BinaryCondition", + "CM_Vec2BinaryOperation", + "CM_Vec2ScalarOperation", + "CM_Vec2ToScalarBinaryOperation", + "CM_Vec2ToScalarUnaryOperation", + "CM_Vec2UnaryCondition", + "CM_Vec2UnaryOperation", + 
"CM_Vec3BinaryCondition", + "CM_Vec3BinaryOperation", + "CM_Vec3ScalarOperation", + "CM_Vec3ToScalarBinaryOperation", + "CM_Vec3ToScalarUnaryOperation", + "CM_Vec3UnaryCondition", + "CM_Vec3UnaryOperation", + "CM_Vec4BinaryCondition", + "CM_Vec4BinaryOperation", + "CM_Vec4ScalarOperation", + "CM_Vec4ToScalarBinaryOperation", + "CM_Vec4ToScalarUnaryOperation", + "CM_Vec4UnaryCondition", + "CM_Vec4UnaryOperation" + ], + { + "title_aux": "ComfyMath" + } + ], + "https://github.com/fearnworks/ComfyUI_FearnworksNodes/raw/main/fw_nodes.py": [ + [ + "Count Files in Directory (FW)", + "Count Tokens (FW)", + "Token Count Ranker(FW)", + "Trim To Tokens (FW)" + ], + { + "title_aux": "Fearnworks Custom Nodes" + } + ], + "https://github.com/fexli/fexli-util-node-comfyui": [ + [ + "FEBCPrompt", + "FEBatchGenStringBCDocker", + "FEColor2Image", + "FEColorOut", + "FEDataInsertor", + "FEDataPacker", + "FEDataUnpacker", + "FEDeepClone", + "FEDictPacker", + "FEDictUnpacker", + "FEEncLoraLoader", + "FEExtraInfoAdd", + "FEGenStringBCDocker", + "FEGenStringGPT", + "FEImageNoiseGenerate", + "FEImagePadForOutpaint", + "FEImagePadForOutpaintByImage", + "FEOperatorIf", + "FEPythonStrOp", + "FERandomLoraSelect", + "FERandomPrompt", + "FERandomizedColor2Image", + "FERandomizedColorOut", + "FERerouteWithName", + "FESaveEncryptImage", + "FETextCombine", + "FETextInput" + ], + { + "title_aux": "fexli-util-node-comfyui" + } + ], + "https://github.com/filipemeneses/comfy_pixelization": [ + [ + "Pixelization" + ], + { + "title_aux": "Pixelization" + } + ], + "https://github.com/filliptm/ComfyUI_Fill-Nodes": [ + [ + "FL_ImageCaptionSaver", + "FL_ImageRandomizer" + ], + { + "title_aux": "ComfyUI_Fill-Nodes" + } + ], + "https://github.com/fitCorder/fcSuite/raw/main/fcSuite.py": [ + [ + "fcFloat", + "fcFloatMatic", + "fcHex", + "fcInteger" + ], + { + "title_aux": "fcSuite" + } + ], + "https://github.com/florestefano1975/comfyui-portrait-master": [ + [ + "PortraitMaster" + ], + { + "title_aux": "comfyui-portrait-master" + } + ], + "https://github.com/florestefano1975/comfyui-prompt-composer": [ + [ + "PromptComposerCustomLists", + "PromptComposerEffect", + "PromptComposerGrouping", + "PromptComposerMerge", + "PromptComposerStyler", + "PromptComposerTextSingle", + "promptComposerTextMultiple" + ], + { + "title_aux": "comfyui-prompt-composer" + } + ], + "https://github.com/flowtyone/ComfyUI-Flowty-LDSR": [ + [ + "LDSRModelLoader", + "LDSRUpscale", + "LDSRUpscaler" + ], + { + "title_aux": "ComfyUI-Flowty-LDSR" + } + ], + "https://github.com/flyingshutter/As_ComfyUI_CustomNodes": [ + [ + "BatchIndex_AS", + "CropImage_AS", + "ImageMixMasked_As", + "ImageToMask_AS", + "Increment_AS", + "Int2Any_AS", + "LatentAdd_AS", + "LatentMixMasked_As", + "LatentMix_AS", + "LatentToImages_AS", + "LoadLatent_AS", + "MapRange_AS", + "MaskToImage_AS", + "Math_AS", + "NoiseImage_AS", + "Number2Float_AS", + "Number2Int_AS", + "Number_AS", + "SaveLatent_AS", + "TextToImage_AS", + "TextWildcardList_AS" + ], + { + "title_aux": "As_ComfyUI_CustomNodes" + } + ], + "https://github.com/foxtrot-roger/comfyui-rf-nodes": [ + [ + "LogBool", + "LogFloat", + "LogInt", + "LogNumber", + "LogString", + "LogVec2", + "LogVec3", + "RF_AtIndexString", + "RF_BoolToString", + "RF_FloatToString", + "RF_IntToString", + "RF_JsonStyleLoader", + "RF_MergeLines", + "RF_NumberToString", + "RF_OptionsString", + "RF_RangeFloat", + "RF_RangeInt", + "RF_RangeNumber", + "RF_SavePromptInfo", + "RF_SplitLines", + "RF_TextConcatenate", + "RF_TextInput", + "RF_TextReplace", + 
"RF_Timestamp", + "RF_ToString", + "RF_Vec2ToString", + "RF_Vec3ToString", + "TextLine" + ], + { + "title_aux": "RF Nodes" + } + ], + "https://github.com/gemell1/ComfyUI_GMIC": [ + [ + "GmicCliWrapper" + ], + { + "title_aux": "ComfyUI_GMIC" + } + ], + "https://github.com/giriss/comfy-image-saver": [ + [ + "Cfg Literal", + "Checkpoint Selector", + "Int Literal", + "Sampler Selector", + "Save Image w/Metadata", + "Scheduler Selector", + "Seed Generator", + "String Literal", + "Width/Height Literal" + ], + { + "title_aux": "Save Image with Generation Metadata" + } + ], + "https://github.com/glibsonoran/Plush-for-ComfyUI": [ + [ + "DalleImage", + "Enhancer", + "ImgTextSwitch", + "Plush-Exif Wrangler", + "mulTextSwitch" + ], + { + "title_aux": "Plush-for-ComfyUI" + } + ], + "https://github.com/glifxyz/ComfyUI-GlifNodes": [ + [ + "GlifConsistencyDecoder", + "GlifPatchConsistencyDecoderTiled", + "SDXLAspectRatio" + ], + { + "title_aux": "ComfyUI-GlifNodes" + } + ], + "https://github.com/glowcone/comfyui-base64-to-image": [ + [ + "LoadImageFromBase64" + ], + { + "title_aux": "Load Image From Base64 URI" + } + ], + "https://github.com/godspede/ComfyUI_Substring": [ + [ + "SubstringTheory" + ], + { + "title_aux": "ComfyUI Substring" + } + ], + "https://github.com/gokayfem/ComfyUI_VLM_nodes": [ + [ + "Joytag", + "JsonToText", + "KeywordExtraction", + "LLMLoader", + "LLMPromptGenerator", + "LLMSampler", + "LLava Loader Simple", + "LLavaPromptGenerator", + "LLavaSamplerAdvanced", + "LLavaSamplerSimple", + "LlavaClipLoader", + "MoonDream", + "PromptGenerateAPI", + "SimpleText", + "Suggester", + "ViewText" + ], + { + "title_aux": "VLM_nodes" + } + ], + "https://github.com/guoyk93/yk-node-suite-comfyui": [ + [ + "YKImagePadForOutpaint", + "YKMaskToImage" + ], + { + "title_aux": "y.k.'s ComfyUI node suite" + } + ], + "https://github.com/hhhzzyang/Comfyui_Lama": [ + [ + "LamaApply", + "LamaModelLoader", + "YamlConfigLoader" + ], + { + "title_aux": "Comfyui-Lama" + } + ], + "https://github.com/hinablue/ComfyUI_3dPoseEditor": [ + [ + "Hina.PoseEditor3D" + ], + { + "title_aux": "ComfyUI 3D Pose Editor" + } + ], + "https://github.com/hustille/ComfyUI_Fooocus_KSampler": [ + [ + "KSampler With Refiner (Fooocus)" + ], + { + "title_aux": "ComfyUI_Fooocus_KSampler" + } + ], + "https://github.com/hustille/ComfyUI_hus_utils": [ + [ + "3way Prompt Styler", + "Batch State", + "Date Time Format", + "Debug Extra", + "Fetch widget value", + "Text Hash" + ], + { + "title_aux": "hus' utils for ComfyUI" + } + ], + "https://github.com/hylarucoder/ComfyUI-Eagle-PNGInfo": [ + [ + "EagleImageNode", + "SDXLPromptStyler", + "SDXLPromptStylerAdvanced", + "SDXLResolutionPresets" + ], + { + "title_aux": "Eagle PNGInfo" + } + ], + "https://github.com/idrirap/ComfyUI-Lora-Auto-Trigger-Words": [ + [ + "FusionText", + "LoraListNames", + "LoraLoaderAdvanced", + "LoraLoaderStackedAdvanced", + "LoraLoaderStackedVanilla", + "LoraLoaderVanilla", + "LoraTagsOnly", + "Randomizer", + "TagsFormater", + "TagsSelector", + "TextInputBasic" + ], + { + "title_aux": "ComfyUI-Lora-Auto-Trigger-Words" + } + ], + "https://github.com/imb101/ComfyUI-FaceSwap": [ + [ + "FaceSwapNode" + ], + { + "title_aux": "FaceSwap" + } + ], + "https://github.com/jags111/ComfyUI_Jags_Audiotools": [ + [ + "BatchJoinAudio", + "BatchToList", + "BitCrushAudioFX", + "BulkVariation", + "ChorusAudioFX", + "ClippingAudioFX", + "CompressorAudioFX", + "ConcatAudioList", + "ConvolutionAudioFX", + "CutAudio", + "DelayAudioFX", + "DistortionAudioFX", + "DuplicateAudio", + "GainAudioFX", 
+ "GenerateAudioSample", + "GenerateAudioWave", + "GetAudioFromFolderIndex", + "GetSingle", + "GetStringByIndex", + "HighShelfFilter", + "HighpassFilter", + "ImageToSpectral", + "InvertAudioFX", + "JoinAudio", + "LadderFilter", + "LimiterAudioFX", + "ListToBatch", + "LoadAudioDir", + "LoadAudioFile", + "LoadAudioModel (DD)", + "LoadVST3", + "LowShelfFilter", + "LowpassFilter", + "MP3CompressorAudioFX", + "MixAudioTensors", + "NoiseGateAudioFX", + "OTTAudioFX", + "PeakFilter", + "PhaserEffectAudioFX", + "PitchShiftAudioFX", + "PlotSpectrogram", + "PreviewAudioFile", + "PreviewAudioTensor", + "ResampleAudio", + "ReverbAudioFX", + "ReverseAudio", + "SaveAudioTensor", + "SequenceVariation", + "SliceAudio", + "SoundPlayer", + "StretchAudio", + "samplerate" + ], + { + "author": "jags111", + "description": "This extension offers various audio generation tools", + "nickname": "Audiotools", + "title": "Jags_Audiotools", + "title_aux": "ComfyUI_Jags_Audiotools" + } + ], + "https://github.com/jags111/ComfyUI_Jags_VectorMagic": [ + [ + "CircularVAEDecode", + "JagsCLIPSeg", + "JagsClipseg", + "JagsCombineMasks", + "SVG", + "YoloSEGdetectionNode", + "YoloSegNode", + "color_drop", + "my unique name", + "xy_Tiling_KSampler" + ], + { + "author": "jags111", + "description": "This extension offers various vector manipulation and generation tools", + "nickname": "Jags_VectorMagic", + "title": "Jags_VectorMagic", + "title_aux": "ComfyUI_Jags_VectorMagic" + } + ], + "https://github.com/jags111/efficiency-nodes-comfyui": [ + [ + "AnimateDiff Script", + "Apply ControlNet Stack", + "Control Net Stacker", + "Eff. Loader SDXL", + "Efficient Loader", + "HighRes-Fix Script", + "Image Overlay", + "Join XY Inputs of Same Type", + "KSampler (Efficient)", + "KSampler Adv. (Efficient)", + "KSampler SDXL (Eff.)", + "LatentUpscaler", + "LoRA Stack to String converter", + "LoRA Stacker", + "Manual XY Entry Info", + "NNLatentUpscale", + "Noise Control Script", + "Pack SDXL Tuple", + "Tiled Upscaler Script", + "Unpack SDXL Tuple", + "XY Input: Add/Return Noise", + "XY Input: Aesthetic Score", + "XY Input: CFG Scale", + "XY Input: Checkpoint", + "XY Input: Clip Skip", + "XY Input: Control Net", + "XY Input: Control Net Plot", + "XY Input: Denoise", + "XY Input: LoRA", + "XY Input: LoRA Plot", + "XY Input: LoRA Stacks", + "XY Input: Manual XY Entry", + "XY Input: Prompt S/R", + "XY Input: Refiner On/Off", + "XY Input: Sampler/Scheduler", + "XY Input: Seeds++ Batch", + "XY Input: Steps", + "XY Input: VAE", + "XY Plot" + ], + { + "title_aux": "Efficiency Nodes for ComfyUI Version 2.0+" + } + ], + "https://github.com/jamal-alkharrat/ComfyUI_rotate_image": [ + [ + "RotateImage" + ], + { + "title_aux": "ComfyUI_rotate_image" + } + ], + "https://github.com/jamesWalker55/comfyui-various": [ + [], + { + "nodename_pattern": "^JW", + "title_aux": "Various ComfyUI Nodes by Type" + } + ], + "https://github.com/jesenzhang/ComfyUI_StreamDiffusion": [ + [ + "StreamDiffusion_Loader", + "StreamDiffusion_Sampler" + ], + { + "title_aux": "ComfyUI_StreamDiffusion" + } + ], + "https://github.com/jitcoder/lora-info": [ + [ + "ImageFromURL", + "LoraInfo" + ], + { + "title_aux": "LoraInfo" + } + ], + "https://github.com/jjkramhoeft/ComfyUI-Jjk-Nodes": [ + [ + "JjkConcat", + "JjkShowText", + "JjkText", + "SDXLRecommendedImageSize" + ], + { + "title_aux": "ComfyUI-Jjk-Nodes" + } + ], + "https://github.com/jojkaart/ComfyUI-sampler-lcm-alternative": [ + [ + "LCMScheduler", + "SamplerLCMAlternative", + "SamplerLCMCycle" + ], + { + "title_aux": 
"ComfyUI-sampler-lcm-alternative" + } + ], + "https://github.com/jordoh/ComfyUI-Deepface": [ + [ + "DeepfaceExtractFaces", + "DeepfaceVerify" + ], + { + "title_aux": "ComfyUI Deepface" + } + ], + "https://github.com/jtrue/ComfyUI-JaRue": [ + [ + "Text2Image_jru", + "YouTube2Prompt_jru" + ], + { + "nodename_pattern": "_jru$", + "title_aux": "ComfyUI-JaRue" + } + ], + "https://github.com/ka-puna/comfyui-yanc": [ + [ + "YANC.ConcatStrings", + "YANC.FormatDatetimeString", + "YANC.GetWidgetValueString", + "YANC.IntegerCaster", + "YANC.MultilineString", + "YANC.TruncateString" + ], + { + "title_aux": "comfyui-yanc" + } + ], + "https://github.com/kadirnar/ComfyUI-Transformers": [ + [ + "DepthEstimationPipeline", + "ImageClassificationPipeline", + "ImageSegmentationPipeline", + "ObjectDetectionPipeline" + ], + { + "title_aux": "ComfyUI-Transformers" + } + ], + "https://github.com/kenjiqq/qq-nodes-comfyui": [ + [ + "Any List", + "Axis Pack", + "Axis Unpack", + "Image Accumulator End", + "Image Accumulator Start", + "Load Lines From Text File", + "Slice List", + "Text Splitter", + "XY Grid Helper" + ], + { + "title_aux": "qq-nodes-comfyui" + } + ], + "https://github.com/kft334/Knodes": [ + [ + "Image(s) To Websocket (Base64)", + "ImageOutput", + "Load Image (Base64)", + "Load Images (Base64)" + ], + { + "title_aux": "Knodes" + } + ], + "https://github.com/kijai/ComfyUI-CCSR": [ + [ + "CCSR_Model_Select", + "CCSR_Upscale" + ], + { + "title_aux": "ComfyUI-CCSR" + } + ], + "https://github.com/kijai/ComfyUI-DDColor": [ + [ + "DDColor_Colorize" + ], + { + "title_aux": "ComfyUI-DDColor" + } + ], + "https://github.com/kijai/ComfyUI-KJNodes": [ + [ + "AddLabel", + "BatchCLIPSeg", + "BatchCropFromMask", + "BatchCropFromMaskAdvanced", + "BatchUncrop", + "BatchUncropAdvanced", + "BboxToInt", + "ColorMatch", + "ColorToMask", + "CondPassThrough", + "ConditioningMultiCombine", + "ConditioningSetMaskAndCombine", + "ConditioningSetMaskAndCombine3", + "ConditioningSetMaskAndCombine4", + "ConditioningSetMaskAndCombine5", + "CreateAudioMask", + "CreateFadeMask", + "CreateFadeMaskAdvanced", + "CreateFluidMask", + "CreateGradientMask", + "CreateMagicMask", + "CreateShapeMask", + "CreateTextMask", + "CreateVoronoiMask", + "CrossFadeImages", + "DummyLatentOut", + "EffnetEncode", + "EmptyLatentImagePresets", + "FilterZeroMasksAndCorrespondingImages", + "FlipSigmasAdjusted", + "FloatConstant", + "GLIGENTextBoxApplyBatch", + "GenerateNoise", + "GetImageRangeFromBatch", + "GetImagesFromBatchIndexed", + "GetLatentsFromBatchIndexed", + "GrowMaskWithBlur", + "INTConstant", + "ImageBatchRepeatInterleaving", + "ImageBatchTestPattern", + "ImageConcanate", + "ImageGrabPIL", + "ImageGridComposite2x2", + "ImageGridComposite3x3", + "ImageTransformByNormalizedAmplitude", + "ImageUpscaleWithModelBatched", + "InjectNoiseToLatent", + "InsertImageBatchByIndexes", + "NormalizeLatent", + "NormalizedAmplitudeToMask", + "OffsetMask", + "OffsetMaskByNormalizedAmplitude", + "ReferenceOnlySimple3", + "ReplaceImagesInBatch", + "ResizeMask", + "ReverseImageBatch", + "RoundMask", + "SaveImageWithAlpha", + "ScaleBatchPromptSchedule", + "SomethingToString", + "SoundReactive", + "SplitBboxes", + "StableZero123_BatchSchedule", + "StringConstant", + "VRAM_Debug", + "WidgetToString" + ], + { + "title_aux": "KJNodes for ComfyUI" + } + ], + "https://github.com/kijai/ComfyUI-Marigold": [ + [ + "ColorizeDepthmap", + "MarigoldDepthEstimation", + "RemapDepth", + "SaveImageOpenEXR" + ], + { + "title_aux": "Marigold depth estimation in ComfyUI" + } + ], + 
"https://github.com/kijai/ComfyUI-SVD": [ + [ + "SVDimg2vid" + ], + { + "title_aux": "ComfyUI-SVD" + } + ], + "https://github.com/kinfolk0117/ComfyUI_GradientDeepShrink": [ + [ + "GradientPatchModelAddDownscale", + "GradientPatchModelAddDownscaleAdvanced" + ], + { + "title_aux": "ComfyUI_GradientDeepShrink" + } + ], + "https://github.com/kinfolk0117/ComfyUI_Pilgram": [ + [ + "Pilgram" + ], + { + "title_aux": "ComfyUI_Pilgram" + } + ], + "https://github.com/kinfolk0117/ComfyUI_SimpleTiles": [ + [ + "DynamicTileMerge", + "DynamicTileSplit", + "TileCalc", + "TileMerge", + "TileSplit" + ], + { + "title_aux": "SimpleTiles" + } + ], + "https://github.com/kinfolk0117/ComfyUI_TiledIPAdapter": [ + [ + "TiledIPAdapter" + ], + { + "title_aux": "TiledIPAdapter" + } + ], + "https://github.com/knuknX/ComfyUI-Image-Tools": [ + [ + "BatchImagePathLoader", + "ImageBgRemoveProcessor", + "ImageCheveretoUploader", + "ImageStandardResizeProcessor", + "JSONMessageNotifyTool", + "PreviewJSONNode", + "SingleImagePathLoader", + "SingleImageUrlLoader" + ], + { + "title_aux": "ComfyUI-Image-Tools" + } + ], + "https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI": [ + [ + "LLLiteLoader" + ], + { + "title_aux": "ControlNet-LLLite-ComfyUI" + } + ], + "https://github.com/komojini/ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes": [ + [ + "S3 Bucket LoRA", + "S3Bucket_Load_LoRA", + "XL DreamBooth LoRA", + "XLDB_LoRA" + ], + { + "title_aux": "ComfyUI_SDXL_DreamBooth_LoRA_CustomNodes" + } + ], + "https://github.com/komojini/komojini-comfyui-nodes": [ + [ + "BatchCreativeInterpolationNodeDynamicSettings", + "CachedGetter", + "DragNUWAImageCanvas", + "FlowBuilder", + "FlowBuilder (adv)", + "FlowBuilder (advanced)", + "FlowBuilder (advanced) Setter", + "FlowBuilderSetter", + "FlowBuilderSetter (adv)", + "Getter", + "ImageCropByRatio", + "ImageCropByRatioAndResize", + "ImageGetter", + "ImageMerger", + "ImagesCropByRatioAndResizeBatch", + "KSamplerAdvancedCacheable", + "KSamplerCacheable", + "Setter", + "UltimateVideoLoader", + "UltimateVideoLoader (simple)", + "YouTubeVideoLoader" + ], + { + "title_aux": "komojini-comfyui-nodes" + } + ], + "https://github.com/kwaroran/abg-comfyui": [ + [ + "Remove Image Background (abg)" + ], + { + "title_aux": "abg-comfyui" + } + ], + "https://github.com/laksjdjf/LCMSampler-ComfyUI": [ + [ + "SamplerLCM", + "TAESDLoader" + ], + { + "title_aux": "LCMSampler-ComfyUI" + } + ], + "https://github.com/laksjdjf/LoRA-Merger-ComfyUI": [ + [ + "LoraLoaderFromWeight", + "LoraLoaderWeightOnly", + "LoraMerge", + "LoraSave" + ], + { + "title_aux": "LoRA-Merger-ComfyUI" + } + ], + "https://github.com/laksjdjf/attention-couple-ComfyUI": [ + [ + "Attention couple" + ], + { + "title_aux": "attention-couple-ComfyUI" + } + ], + "https://github.com/laksjdjf/cd-tuner_negpip-ComfyUI": [ + [ + "CDTuner", + "Negapip", + "Negpip" + ], + { + "title_aux": "cd-tuner_negpip-ComfyUI" + } + ], + "https://github.com/laksjdjf/pfg-ComfyUI": [ + [ + "PFG" + ], + { + "title_aux": "pfg-ComfyUI" + } + ], + "https://github.com/lilly1987/ComfyUI_node_Lilly": [ + [ + "CheckpointLoaderSimpleText", + "LoraLoaderText", + "LoraLoaderTextRandom", + "Random_Sampler", + "VAELoaderDecode" + ], + { + "title_aux": "simple wildcard for ComfyUI" + } + ], + "https://github.com/lldacing/comfyui-easyapi-nodes": [ + [ + "Base64ToImage", + "Base64ToMask", + "ImageToBase64", + "ImageToBase64Advanced", + "LoadImageFromURL", + "LoadImageToBase64", + "LoadMaskFromURL", + "MaskImageToBase64", + "MaskToBase64", + "MaskToBase64Image", + "SamAutoMaskSEGS" + ], + { 
+ "title_aux": "comfyui-easyapi-nodes" + } + ], + "https://github.com/longgui0318/comfyui-mask-util": [ + [ + "Mask Region Info", + "Mask Selection Of Masks", + "Split Masks" + ], + { + "title_aux": "comfyui-mask-util" + } + ], + "https://github.com/lordgasmic/ComfyUI-Wildcards/raw/master/wildcards.py": [ + [ + "CLIPTextEncodeWithWildcards" + ], + { + "title_aux": "Wildcards" + } + ], + "https://github.com/lrzjason/ComfyUIJasonNode/raw/main/SDXLMixSampler.py": [ + [ + "SDXLMixSampler" + ], + { + "title_aux": "ComfyUIJasonNode" + } + ], + "https://github.com/ltdrdata/ComfyUI-Impact-Pack": [ + [ + "AddMask", + "BasicPipeToDetailerPipe", + "BasicPipeToDetailerPipeSDXL", + "BboxDetectorCombined", + "BboxDetectorCombined_v2", + "BboxDetectorForEach", + "BboxDetectorSEGS", + "BitwiseAndMask", + "BitwiseAndMaskForEach", + "CLIPSegDetectorProvider", + "CfgScheduleHookProvider", + "CombineRegionalPrompts", + "CoreMLDetailerHookProvider", + "DenoiseScheduleHookProvider", + "DenoiseSchedulerDetailerHookProvider", + "DetailerForEach", + "DetailerForEachDebug", + "DetailerForEachDebugPipe", + "DetailerForEachPipe", + "DetailerForEachPipeForAnimateDiff", + "DetailerHookCombine", + "DetailerPipeToBasicPipe", + "EditBasicPipe", + "EditDetailerPipe", + "EditDetailerPipeSDXL", + "EmptySegs", + "FaceDetailer", + "FaceDetailerPipe", + "FromBasicPipe", + "FromBasicPipe_v2", + "FromDetailerPipe", + "FromDetailerPipeSDXL", + "FromDetailerPipe_v2", + "ImageListToImageBatch", + "ImageMaskSwitch", + "ImageReceiver", + "ImageSender", + "ImpactAssembleSEGS", + "ImpactCombineConditionings", + "ImpactCompare", + "ImpactConcatConditionings", + "ImpactConditionalBranch", + "ImpactConditionalBranchSelMode", + "ImpactConditionalStopIteration", + "ImpactControlBridge", + "ImpactControlNetApplyAdvancedSEGS", + "ImpactControlNetApplySEGS", + "ImpactControlNetClearSEGS", + "ImpactConvertDataType", + "ImpactDecomposeSEGS", + "ImpactDilateMask", + "ImpactDilateMaskInSEGS", + "ImpactDilate_Mask_SEG_ELT", + "ImpactDummyInput", + "ImpactEdit_SEG_ELT", + "ImpactFloat", + "ImpactFrom_SEG_ELT", + "ImpactGaussianBlurMask", + "ImpactGaussianBlurMaskInSEGS", + "ImpactHFTransformersClassifierProvider", + "ImpactIfNone", + "ImpactImageBatchToImageList", + "ImpactImageInfo", + "ImpactInt", + "ImpactInversedSwitch", + "ImpactIsNotEmptySEGS", + "ImpactKSamplerAdvancedBasicPipe", + "ImpactKSamplerBasicPipe", + "ImpactLatentInfo", + "ImpactLogger", + "ImpactLogicalOperators", + "ImpactMakeImageBatch", + "ImpactMakeImageList", + "ImpactMakeTileSEGS", + "ImpactMinMax", + "ImpactNeg", + "ImpactNodeSetMuteState", + "ImpactQueueTrigger", + "ImpactQueueTriggerCountdown", + "ImpactRemoteBoolean", + "ImpactRemoteInt", + "ImpactSEGSClassify", + "ImpactSEGSConcat", + "ImpactSEGSLabelFilter", + "ImpactSEGSOrderedFilter", + "ImpactSEGSPicker", + "ImpactSEGSRangeFilter", + "ImpactSEGSToMaskBatch", + "ImpactSEGSToMaskList", + "ImpactScaleBy_BBOX_SEG_ELT", + "ImpactSegsAndMask", + "ImpactSegsAndMaskForEach", + "ImpactSetWidgetValue", + "ImpactSimpleDetectorSEGS", + "ImpactSimpleDetectorSEGSPipe", + "ImpactSimpleDetectorSEGS_for_AD", + "ImpactSleep", + "ImpactStringSelector", + "ImpactSwitch", + "ImpactValueReceiver", + "ImpactValueSender", + "ImpactWildcardEncode", + "ImpactWildcardProcessor", + "IterativeImageUpscale", + "IterativeLatentUpscale", + "KSamplerAdvancedProvider", + "KSamplerProvider", + "LatentPixelScale", + "LatentReceiver", + "LatentSender", + "LatentSwitch", + "MMDetDetectorProvider", + "MMDetLoader", + "MaskDetailerPipe", + 
"MaskListToMaskBatch", + "MaskPainter", + "MaskToSEGS", + "MaskToSEGS_for_AnimateDiff", + "MasksToMaskList", + "MediaPipeFaceMeshToSEGS", + "NoiseInjectionDetailerHookProvider", + "NoiseInjectionHookProvider", + "ONNXDetectorProvider", + "ONNXDetectorSEGS", + "PixelKSampleHookCombine", + "PixelKSampleUpscalerProvider", + "PixelKSampleUpscalerProviderPipe", + "PixelTiledKSampleUpscalerProvider", + "PixelTiledKSampleUpscalerProviderPipe", + "PreviewBridge", + "PreviewBridgeLatent", + "PreviewDetailerHookProvider", + "ReencodeLatent", + "ReencodeLatentPipe", + "RegionalPrompt", + "RegionalSampler", + "RegionalSamplerAdvanced", + "RemoveImageFromSEGS", + "RemoveNoiseMask", + "SAMDetectorCombined", + "SAMDetectorSegmented", + "SAMLoader", + "SEGSDetailer", + "SEGSDetailerForAnimateDiff", + "SEGSLabelFilterDetailerHookProvider", + "SEGSOrderedFilterDetailerHookProvider", + "SEGSPaste", + "SEGSPreview", + "SEGSPreviewCNet", + "SEGSRangeFilterDetailerHookProvider", + "SEGSSwitch", + "SEGSToImageList", + "SegmDetectorCombined", + "SegmDetectorCombined_v2", + "SegmDetectorForEach", + "SegmDetectorSEGS", + "Segs Mask", + "Segs Mask ForEach", + "SegsMaskCombine", + "SegsToCombinedMask", + "SetDefaultImageForSEGS", + "StepsScheduleHookProvider", + "SubtractMask", + "SubtractMaskForEach", + "TiledKSamplerProvider", + "ToBasicPipe", + "ToBinaryMask", + "ToDetailerPipe", + "ToDetailerPipeSDXL", + "TwoAdvancedSamplersForMask", + "TwoSamplersForMask", + "TwoSamplersForMaskUpscalerProvider", + "TwoSamplersForMaskUpscalerProviderPipe", + "UltralyticsDetectorProvider", + "UnsamplerDetailerHookProvider", + "UnsamplerHookProvider" + ], + { + "author": "Dr.Lt.Data", + "description": "This extension offers various detector nodes and detailer nodes that allow you to configure a workflow that automatically enhances facial details. 
It also provides an iterative upscaler.", +      "nickname": "Impact Pack", +      "title": "Impact Pack", +      "title_aux": "ComfyUI Impact Pack" +    } +  ], +  "https://github.com/ltdrdata/ComfyUI-Inspire-Pack": [ +    [ +      "AnimeLineArt_Preprocessor_Provider_for_SEGS //Inspire", +      "ApplyRegionalIPAdapters //Inspire", +      "BindImageListPromptList //Inspire", +      "CLIPTextEncodeWithWeight //Inspire", +      "CacheBackendData //Inspire", +      "CacheBackendDataList //Inspire", +      "CacheBackendDataNumberKey //Inspire", +      "CacheBackendDataNumberKeyList //Inspire", +      "Canny_Preprocessor_Provider_for_SEGS //Inspire", +      "ChangeImageBatchSize //Inspire", +      "CheckpointLoaderSimpleShared //Inspire", +      "Color_Preprocessor_Provider_for_SEGS //Inspire", +      "ConcatConditioningsWithMultiplier //Inspire", +      "DWPreprocessor_Provider_for_SEGS //Inspire", +      "FakeScribblePreprocessor_Provider_for_SEGS //Inspire", +      "FloatRange //Inspire", +      "FromIPAdapterPipe //Inspire", +      "GlobalSampler //Inspire", +      "GlobalSeed //Inspire", +      "HEDPreprocessor_Provider_for_SEGS //Inspire", +      "HyperTile //Inspire", +      "IPAdapterModelHelper //Inspire", +      "ImageBatchSplitter //Inspire", +      "InpaintPreprocessor_Provider_for_SEGS //Inspire", +      "KSampler //Inspire", +      "KSamplerAdvanced //Inspire", +      "KSamplerAdvancedPipe //Inspire", +      "KSamplerAdvancedProgress //Inspire", +      "KSamplerPipe //Inspire", +      "KSamplerProgress //Inspire", +      "LatentBatchSplitter //Inspire", +      "LeRes_DepthMap_Preprocessor_Provider_for_SEGS //Inspire", +      "LineArt_Preprocessor_Provider_for_SEGS //Inspire", +      "ListCounter //Inspire", +      "LoadImage //Inspire", +      "LoadImageListFromDir //Inspire", +      "LoadImagesFromDir //Inspire", +      "LoadPromptsFromDir //Inspire", +      "LoadPromptsFromFile //Inspire", +      "LoadSinglePromptFromFile //Inspire", +      "LoraBlockInfo //Inspire", +      "LoraLoaderBlockWeight //Inspire", +      "MakeBasicPipe //Inspire", +      "Manga2Anime_LineArt_Preprocessor_Provider_for_SEGS //Inspire", +      "MediaPipeFaceMeshDetectorProvider //Inspire", +      "MediaPipe_FaceMesh_Preprocessor_Provider_for_SEGS //Inspire", +      "MeshGraphormerDepthMapPreprocessorProvider_for_SEGS //Inspire", +      "MiDaS_DepthMap_Preprocessor_Provider_for_SEGS //Inspire", +      "OpenPose_Preprocessor_Provider_for_SEGS //Inspire", +      "PromptBuilder //Inspire", +      "PromptExtractor //Inspire", +      "RandomGeneratorForList //Inspire", +      "RegionalConditioningColorMask //Inspire", +      "RegionalConditioningSimple //Inspire", +      "RegionalIPAdapterColorMask //Inspire", +      "RegionalIPAdapterEncodedColorMask //Inspire", +      "RegionalIPAdapterEncodedMask //Inspire", +      "RegionalIPAdapterMask //Inspire", +      "RegionalPromptColorMask //Inspire", +      "RegionalPromptSimple //Inspire", +      "RegionalSeedExplorerColorMask //Inspire", +      "RegionalSeedExplorerMask //Inspire", +      "RemoveBackendData //Inspire", +      "RemoveBackendDataNumberKey //Inspire", +      "RemoveControlNet //Inspire", +      "RemoveControlNetFromRegionalPrompts //Inspire", +      "RetrieveBackendData //Inspire", +      "RetrieveBackendDataNumberKey //Inspire", +      "SeedExplorer //Inspire", +      "ShowCachedInfo //Inspire", +      "TilePreprocessor_Provider_for_SEGS //Inspire", +      "ToIPAdapterPipe //Inspire", +      "UnzipPrompt //Inspire", +      "WildcardEncode //Inspire", +      "XY Input: Lora Block Weight //Inspire", +      "ZipPrompt //Inspire", +      "Zoe_DepthMap_Preprocessor_Provider_for_SEGS //Inspire" +    ], +    { +      "author": "Dr.Lt.Data", +      "description": "This extension provides various nodes to support Lora Block Weight and the Impact Pack.", +      "nickname": "Inspire Pack", +      "nodename_pattern": "Inspire$", +      "title": "Inspire Pack", +      "title_aux": "ComfyUI Inspire Pack"
+ } + ], + "https://github.com/m-sokes/ComfyUI-Sokes-Nodes": [ + [ + "Custom Date Format | sokes \ud83e\uddac", + "Latent Switch x9 | sokes \ud83e\uddac" + ], + { + "title_aux": "ComfyUI Sokes Nodes" + } + ], + "https://github.com/m957ymj75urz/ComfyUI-Custom-Nodes/raw/main/clip-text-encode-split/clip_text_encode_split.py": [ + [ + "RawText", + "RawTextCombine", + "RawTextEncode", + "RawTextReplace" + ], + { + "title_aux": "m957ymj75urz/ComfyUI-Custom-Nodes" + } + ], + "https://github.com/mape/ComfyUI-mape-Helpers": [ + [ + "mape Variable" + ], + { + "author": "mape", + "description": "Various QoL improvements like prompt tweaking, variable assignment, image preview, fuzzy search, error reporting, organizing and node navigation.", + "nickname": "\ud83d\udfe1 mape's helpers", + "title": "mape's helpers", + "title_aux": "mape's ComfyUI Helpers" + } + ], + "https://github.com/marhensa/sdxl-recommended-res-calc": [ + [ + "RecommendedResCalc" + ], + { + "title_aux": "Recommended Resolution Calculator" + } + ], + "https://github.com/martijnat/comfyui-previewlatent": [ + [ + "PreviewLatent", + "PreviewLatentAdvanced", + "PreviewLatentXL" + ], + { + "title_aux": "comfyui-previewlatent" + } + ], + "https://github.com/massao000/ComfyUI_aspect_ratios": [ + [ + "Aspect Ratios Node" + ], + { + "title_aux": "ComfyUI_aspect_ratios" + } + ], + "https://github.com/matan1905/ComfyUI-Serving-Toolkit": [ + [ + "DiscordServing", + "ServingInputNumber", + "ServingInputText", + "ServingOutput", + "WebSocketServing" + ], + { + "title_aux": "ComfyUI Serving toolkit" + } + ], + "https://github.com/mav-rik/facerestore_cf": [ + [ + "CropFace", + "FaceRestoreCFWithModel", + "FaceRestoreModelLoader" + ], + { + "title_aux": "Facerestore CF (Code Former)" + } + ], + "https://github.com/mbrostami/ComfyUI-HF": [ + [ + "GPT2Node" + ], + { + "title_aux": "ComfyUI-HF" + } + ], + "https://github.com/mcmonkeyprojects/sd-dynamic-thresholding": [ + [ + "DynamicThresholdingFull", + "DynamicThresholdingSimple" + ], + { + "title_aux": "Stable Diffusion Dynamic Thresholding (CFG Scale Fix)" + } + ], + "https://github.com/meap158/ComfyUI-Background-Replacement": [ + [ + "BackgroundReplacement", + "ImageComposite" + ], + { + "title_aux": "ComfyUI-Background-Replacement" + } + ], + "https://github.com/meap158/ComfyUI-GPU-temperature-protection": [ + [ + "GPUTemperatureProtection" + ], + { + "title_aux": "GPU temperature protection" + } + ], + "https://github.com/meap158/ComfyUI-Prompt-Expansion": [ + [ + "PromptExpansion" + ], + { + "title_aux": "ComfyUI-Prompt-Expansion" + } + ], + "https://github.com/melMass/comfy_mtb": [ + [ + "Animation Builder (mtb)", + "Any To String (mtb)", + "Batch Float (mtb)", + "Batch Float Assemble (mtb)", + "Batch Float Fill (mtb)", + "Batch Make (mtb)", + "Batch Merge (mtb)", + "Batch Shake (mtb)", + "Batch Shape (mtb)", + "Batch Transform (mtb)", + "Bbox (mtb)", + "Bbox From Mask (mtb)", + "Blur (mtb)", + "Color Correct (mtb)", + "Colored Image (mtb)", + "Concat Images (mtb)", + "Crop (mtb)", + "Debug (mtb)", + "Deep Bump (mtb)", + "Export With Ffmpeg (mtb)", + "Face Swap (mtb)", + "Film Interpolation (mtb)", + "Fit Number (mtb)", + "Float To Number (mtb)", + "Get Batch From History (mtb)", + "Image Compare (mtb)", + "Image Premultiply (mtb)", + "Image Remove Background Rembg (mtb)", + "Image Resize Factor (mtb)", + "Image Tile Offset (mtb)", + "Int To Bool (mtb)", + "Int To Number (mtb)", + "Interpolate Clip Sequential (mtb)", + "Latent Lerp (mtb)", + "Load Face Analysis Model (mtb)", + "Load Face Enhance 
Model (mtb)", + "Load Face Swap Model (mtb)", + "Load Film Model (mtb)", + "Load Image From Url (mtb)", + "Load Image Sequence (mtb)", + "Mask To Image (mtb)", + "Math Expression (mtb)", + "Model Patch Seamless (mtb)", + "Pick From Batch (mtb)", + "Qr Code (mtb)", + "Restore Face (mtb)", + "Save Gif (mtb)", + "Save Image Grid (mtb)", + "Save Image Sequence (mtb)", + "Save Tensors (mtb)", + "Sharpen (mtb)", + "Smart Step (mtb)", + "Stack Images (mtb)", + "String Replace (mtb)", + "Styles Loader (mtb)", + "Text To Image (mtb)", + "Transform Image (mtb)", + "Uncrop (mtb)", + "Unsplash Image (mtb)", + "Vae Decode (mtb)" + ], + { + "nodename_pattern": "\\(mtb\\)$", + "title_aux": "MTB Nodes" + } + ], + "https://github.com/mihaiiancu/ComfyUI_Inpaint": [ + [ + "InpaintMediapipe" + ], + { + "title_aux": "mihaiiancu/Inpaint" + } + ], + "https://github.com/mikkel/ComfyUI-text-overlay": [ + [ + "Image Text Overlay" + ], + { + "title_aux": "ComfyUI - Text Overlay Plugin" + } + ], + "https://github.com/mikkel/comfyui-mask-boundingbox": [ + [ + "Mask Bounding Box" + ], + { + "title_aux": "ComfyUI - Mask Bounding Box" + } + ], + "https://github.com/mlinmg/ComfyUI-LaMA-Preprocessor": [ + [ + "LaMaPreprocessor", + "lamaPreprocessor" + ], + { + "title_aux": "LaMa Preprocessor [WIP]" + } + ], + "https://github.com/modusCell/ComfyUI-dimension-node-modusCell": [ + [ + "DimensionProviderFree modusCell", + "DimensionProviderRatio modusCell", + "String Concat modusCell" + ], + { + "title_aux": "Preset Dimensions" + } + ], + "https://github.com/mpiquero7164/ComfyUI-SaveImgPrompt": [ + [ + "Save IMG Prompt" + ], + { + "title_aux": "SaveImgPrompt" + } + ], + "https://github.com/nagolinc/ComfyUI_FastVAEDecorder_SDXL": [ + [ + "FastLatentToImage" + ], + { + "title_aux": "ComfyUI_FastVAEDecorder_SDXL" + } + ], + "https://github.com/natto-maki/ComfyUI-NegiTools": [ + [ + "NegiTools_CompositeImages", + "NegiTools_DepthEstimationByMarigold", + "NegiTools_DetectFaceRotationForInpainting", + "NegiTools_ImageProperties", + "NegiTools_LatentProperties", + "NegiTools_NoiseImageGenerator", + "NegiTools_OpenAiDalle3", + "NegiTools_OpenAiGpt", + "NegiTools_OpenAiGpt4v", + "NegiTools_OpenAiTranslate", + "NegiTools_OpenPoseToPointList", + "NegiTools_PointListToMask", + "NegiTools_RandomImageLoader", + "NegiTools_SaveImageToDirectory", + "NegiTools_SeedGenerator", + "NegiTools_StereoImageGenerator", + "NegiTools_StringFunction" + ], + { + "title_aux": "ComfyUI-NegiTools" + } + ], + "https://github.com/nicolai256/comfyUI_Nodes_nicolai256/raw/main/yugioh-presets.py": [ + [ + "yugioh_Presets" + ], + { + "title_aux": "comfyUI_Nodes_nicolai256" + } + ], + "https://github.com/ningxiaoxiao/comfyui-NDI": [ + [ + "NDI_LoadImage", + "NDI_SendImage" + ], + { + "title_aux": "comfyui-NDI" + } + ], + "https://github.com/nkchocoai/ComfyUI-PromptUtilities": [ + [ + "PromptUtilitiesConstString", + "PromptUtilitiesConstStringMultiLine", + "PromptUtilitiesFormatString", + "PromptUtilitiesJoinStringList", + "PromptUtilitiesLoadPreset", + "PromptUtilitiesLoadPresetAdvanced", + "PromptUtilitiesRandomPreset", + "PromptUtilitiesRandomPresetAdvanced" + ], + { + "title_aux": "ComfyUI-PromptUtilities" + } + ], + "https://github.com/nkchocoai/ComfyUI-SizeFromPresets": [ + [ + "EmptyLatentImageFromPresetsSD15", + "EmptyLatentImageFromPresetsSDXL", + "RandomEmptyLatentImageFromPresetsSD15", + "RandomEmptyLatentImageFromPresetsSDXL", + "RandomSizeFromPresetsSD15", + "RandomSizeFromPresetsSDXL", + "SizeFromPresetsSD15", + "SizeFromPresetsSDXL" + ], + { + 
"title_aux": "ComfyUI-SizeFromPresets" + } + ], + "https://github.com/nkchocoai/ComfyUI-TextOnSegs": [ + [ + "CalcMaxFontSize", + "ExtractDominantColor", + "GetComplementaryColor", + "SegsToRegion", + "TextOnSegsFloodFill" + ], + { + "title_aux": "ComfyUI-TextOnSegs" + } + ], + "https://github.com/noembryo/ComfyUI-noEmbryo": [ + [ + "PromptTermList1", + "PromptTermList2", + "PromptTermList3", + "PromptTermList4", + "PromptTermList5", + "PromptTermList6" + ], + { + "author": "noEmbryo", + "description": "Some useful nodes for ComfyUI", + "nickname": "noEmbryo", + "title": "noEmbryo nodes for ComfyUI", + "title_aux": "noEmbryo nodes" + } + ], + "https://github.com/nosiu/comfyui-instantId-faceswap": [ + [ + "FaceEmbed", + "FaceSwapGenerationInpaint", + "FaceSwapSetupPipeline", + "LCMLora" + ], + { + "title_aux": "ComfyUI InstantID Faceswapper" + } + ], + "https://github.com/noxinias/ComfyUI_NoxinNodes": [ + [ + "NoxinChime", + "NoxinPromptLoad", + "NoxinPromptSave", + "NoxinScaledResolution", + "NoxinSimpleMath", + "NoxinSplitPrompt" + ], + { + "title_aux": "ComfyUI_NoxinNodes" + } + ], + "https://github.com/ntc-ai/ComfyUI-DARE-LoRA-Merge": [ + [ + "Apply LoRA", + "DARE Merge LoRA Stack", + "Save LoRA" + ], + { + "title_aux": "ComfyUI - Apply LoRA Stacker with DARE" + } + ], + "https://github.com/ntdviet/comfyui-ext/raw/main/custom_nodes/gcLatentTunnel/gcLatentTunnel.py": [ + [ + "gcLatentTunnel" + ], + { + "title_aux": "ntdviet/comfyui-ext" + } + ], + "https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92": [ + [ + "CLIPStringEncode _O", + "Chat completion _O", + "ChatGPT Simple _O", + "ChatGPT _O", + "ChatGPT compact _O", + "Chat_Completion _O", + "Chat_Message _O", + "Chat_Message_fromString _O", + "Concat Text _O", + "ConcatRandomNSP_O", + "Debug String _O", + "Debug Text _O", + "Debug Text route _O", + "Edit_image _O", + "Equation1param _O", + "Equation2params _O", + "GetImage_(Width&Height) _O", + "GetLatent_(Width&Height) _O", + "ImageScaleFactor _O", + "ImageScaleFactorSimple _O", + "LatentUpscaleFactor _O", + "LatentUpscaleFactorSimple _O", + "LatentUpscaleMultiply", + "Note _O", + "RandomNSP _O", + "Replace Text _O", + "String _O", + "Text _O", + "Text2Image _O", + "Trim Text _O", + "VAEDecodeParallel _O", + "combine_chat_messages _O", + "compine_chat_messages _O", + "concat Strings _O", + "create image _O", + "create_image _O", + "debug Completeion _O", + "debug messages_O", + "float _O", + "floatToInt _O", + "floatToText _O", + "int _O", + "intToFloat _O", + "load_openAI _O", + "replace String _O", + "replace String advanced _O", + "saveTextToFile _O", + "seed _O", + "selectLatentFromBatch _O", + "string2Image _O", + "trim String _O", + "variation_image _O" + ], + { + "title_aux": "Quality of life Suit:V2" + } + ], + "https://github.com/ostris/ostris_nodes_comfyui": [ + [ + "LLM Pipe Loader - Ostris", + "LLM Prompt Upsampling - Ostris", + "One Seed - Ostris", + "Text Box - Ostris" + ], + { + "nodename_pattern": "- Ostris$", + "title_aux": "Ostris Nodes ComfyUI" + } + ], + "https://github.com/ownimage/ComfyUI-ownimage": [ + [ + "Caching Image Loader" + ], + { + "title_aux": "ComfyUI-ownimage" + } + ], + "https://github.com/oyvindg/ComfyUI-TrollSuite": [ + [ + "BinaryImageMask", + "ImagePadding", + "LoadLastImage", + "RandomMask", + "TransparentImage" + ], + { + "title_aux": "ComfyUI-TrollSuite" + } + ], + "https://github.com/palant/extended-saveimage-comfyui": [ + [ + "SaveImageExtended" + ], + { + "title_aux": "Extended Save Image for ComfyUI" + } + ], + 
"https://github.com/palant/image-resize-comfyui": [ + [ + "ImageResize" + ], + { + "title_aux": "Image Resize for ComfyUI" + } + ], + "https://github.com/pants007/comfy-pants": [ + [ + "CLIPTextEncodeAIO", + "Image Make Square" + ], + { + "title_aux": "pants" + } + ], + "https://github.com/paulo-coronado/comfy_clip_blip_node": [ + [ + "CLIPTextEncodeBLIP", + "CLIPTextEncodeBLIP-2", + "Example" + ], + { + "title_aux": "comfy_clip_blip_node" + } + ], + "https://github.com/picturesonpictures/comfy_PoP": [ + [ + "AdaptiveCannyDetector_PoP", + "AnyAspectRatio", + "ConditioningMultiplier_PoP", + "ConditioningNormalizer_PoP", + "DallE3_PoP", + "LoadImageResizer_PoP", + "LoraStackLoader10_PoP", + "LoraStackLoader_PoP", + "VAEDecoderPoP", + "VAEEncoderPoP" + ], + { + "title_aux": "comfy_PoP" + } + ], + "https://github.com/pkpkTech/ComfyUI-SaveAVIF": [ + [ + "SaveAvif" + ], + { + "title_aux": "ComfyUI-SaveAVIF" + } + ], + "https://github.com/pkpkTech/ComfyUI-TemporaryLoader": [ + [ + "LoadTempCheckpoint", + "LoadTempLoRA", + "LoadTempMultiLoRA" + ], + { + "title_aux": "ComfyUI-TemporaryLoader" + } + ], + "https://github.com/pythongosssss/ComfyUI-Custom-Scripts": [ + [ + "CheckpointLoader|pysssss", + "ConstrainImageforVideo|pysssss", + "ConstrainImage|pysssss", + "LoadText|pysssss", + "LoraLoader|pysssss", + "MathExpression|pysssss", + "MultiPrimitive|pysssss", + "PlaySound|pysssss", + "Repeater|pysssss", + "ReroutePrimitive|pysssss", + "SaveText|pysssss", + "ShowText|pysssss", + "StringFunction|pysssss" + ], + { + "title_aux": "pythongosssss/ComfyUI-Custom-Scripts" + } + ], + "https://github.com/pythongosssss/ComfyUI-WD14-Tagger": [ + [ + "WD14Tagger|pysssss" + ], + { + "title_aux": "ComfyUI WD 1.4 Tagger" + } + ], + "https://github.com/ramyma/A8R8_ComfyUI_nodes": [ + [ + "Base64ImageInput", + "Base64ImageOutput" + ], + { + "title_aux": "A8R8 ComfyUI Nodes" + } + ], + "https://github.com/rcfcu2000/zhihuige-nodes-comfyui": [ + [ + "Combine ZHGMasks", + "Cover ZHGMasks", + "From ZHG pip", + "GroundingDinoModelLoader (zhihuige)", + "GroundingDinoPIPESegment (zhihuige)", + "GroundingDinoSAMSegment (zhihuige)", + "InvertMask (zhihuige)", + "SAMModelLoader (zhihuige)", + "To ZHG pip", + "ZHG FaceIndex", + "ZHG GetMaskArea", + "ZHG Image Levels", + "ZHG SaveImage", + "ZHG SmoothEdge", + "ZHG UltimateSDUpscale" + ], + { + "title_aux": "zhihuige-nodes-comfyui" + } + ], + "https://github.com/rcsaquino/comfyui-custom-nodes": [ + [ + "BackgroundRemover | rcsaquino", + "VAELoader | rcsaquino", + "VAEProcessor | rcsaquino" + ], + { + "title_aux": "rcsaquino/comfyui-custom-nodes" + } + ], + "https://github.com/receyuki/comfyui-prompt-reader-node": [ + [ + "SDBatchLoader", + "SDLoraLoader", + "SDLoraSelector", + "SDParameterExtractor", + "SDParameterGenerator", + "SDPromptMerger", + "SDPromptReader", + "SDPromptSaver", + "SDTypeConverter" + ], + { + "author": "receyuki", + "description": "ComfyUI node version of the SD Prompt Reader", + "nickname": "SD Prompt Reader", + "title": "SD Prompt Reader", + "title_aux": "comfyui-prompt-reader-node" + } + ], + "https://github.com/redhottensors/ComfyUI-Prediction": [ + [ + "AvoidErasePrediction", + "CFGPrediction", + "CombinePredictions", + "ConditionedPrediction", + "PerpNegPrediction", + "SamplerCustomPrediction", + "ScalePrediction", + "ScaledGuidancePrediction" + ], + { + "author": "RedHotTensors", + "description": "Fully customizable Classifer Free Guidance for ComfyUI", + "nickname": "ComfyUI-Prediction", + "title": "ComfyUI-Prediction", + "title_aux": 
"ComfyUI-Prediction" + } + ], + "https://github.com/rgthree/rgthree-comfy": [ + [], + { + "author": "rgthree", + "description": "A bunch of nodes I created that I also find useful.", + "nickname": "rgthree", + "nodename_pattern": " \\(rgthree\\)$", + "title": "Comfy Nodes", + "title_aux": "rgthree's ComfyUI Nodes" + } + ], + "https://github.com/richinsley/Comfy-LFO": [ + [ + "LFO_Pulse", + "LFO_Sawtooth", + "LFO_Sine", + "LFO_Square", + "LFO_Triangle" + ], + { + "title_aux": "Comfy-LFO" + } + ], + "https://github.com/ricklove/comfyui-ricklove": [ + [ + "RL_Crop_Resize", + "RL_Crop_Resize_Batch", + "RL_Depth16", + "RL_Finetune_Analyze", + "RL_Finetune_Analyze_Batch", + "RL_Finetune_Variable", + "RL_Image_Shadow", + "RL_Image_Threshold_Channels", + "RL_Internet_Search", + "RL_LoadImageSequence", + "RL_Optical_Flow_Dip", + "RL_SaveImageSequence", + "RL_Uncrop", + "RL_Warp_Image", + "RL_Zoe_Depth_Map_Preprocessor", + "RL_Zoe_Depth_Map_Preprocessor_Raw_Infer", + "RL_Zoe_Depth_Map_Preprocessor_Raw_Process" + ], + { + "title_aux": "comfyui-ricklove" + } + ], + "https://github.com/rklaffehn/rk-comfy-nodes": [ + [ + "RK_CivitAIAddHashes", + "RK_CivitAIMetaChecker" + ], + { + "title_aux": "rk-comfy-nodes" + } + ], + "https://github.com/romeobuilderotti/ComfyUI-PNG-Metadata": [ + [ + "SetMetadataAll", + "SetMetadataString" + ], + { + "title_aux": "ComfyUI PNG Metadata" + } + ], + "https://github.com/rui40000/RUI-Nodes": [ + [ + "ABCondition", + "CharacterCount" + ], + { + "title_aux": "RUI-Nodes" + } + ], + "https://github.com/s1dlx/comfy_meh/raw/main/meh.py": [ + [ + "MergingExecutionHelper" + ], + { + "title_aux": "comfy_meh" + } + ], + "https://github.com/seanlynch/comfyui-optical-flow": [ + [ + "Apply optical flow", + "Compute optical flow", + "Visualize optical flow" + ], + { + "title_aux": "ComfyUI Optical Flow" + } + ], + "https://github.com/seanlynch/srl-nodes": [ + [ + "SRL Conditional Interrrupt", + "SRL Eval", + "SRL Filter Image List", + "SRL Format String" + ], + { + "title_aux": "SRL's nodes" + } + ], + "https://github.com/sergekatzmann/ComfyUI_Nimbus-Pack": [ + [ + "ImageResizeAndCropNode", + "ImageSquareAdapterNode" + ], + { + "title_aux": "ComfyUI_Nimbus-Pack" + } + ], + "https://github.com/shadowcz007/comfyui-consistency-decoder": [ + [ + "VAEDecodeConsistencyDecoder", + "VAELoaderConsistencyDecoder" + ], + { + "title_aux": "Consistency Decoder" + } + ], + "https://github.com/shadowcz007/comfyui-mixlab-nodes": [ + [ + "3DImage", + "AppInfo", + "AreaToMask", + "CenterImage", + "CharacterInText", + "ChatGPTOpenAI", + "CkptNames_", + "Color", + "DynamicDelayProcessor", + "EmbeddingPrompt", + "EnhanceImage", + "FaceToMask", + "FeatheredMask", + "FloatSlider", + "FloatingVideo", + "Font", + "GamePal", + "GetImageSize_", + "GradientImage", + "GridOutput", + "ImageColorTransfer", + "ImageCropByAlpha", + "IntNumber", + "JoinWithDelimiter", + "LaMaInpainting", + "LimitNumber", + "LoadImagesFromPath", + "LoadImagesFromURL", + "LoraNames_", + "MergeLayers", + "MirroredImage", + "MultiplicationNode", + "NewLayer", + "NoiseImage", + "OutlineMask", + "PromptImage", + "PromptSimplification", + "PromptSlide", + "RandomPrompt", + "ResizeImageMixlab", + "SamplerNames_", + "SaveImageToLocal", + "ScreenShare", + "Seed_", + "ShowLayer", + "ShowTextForGPT", + "SmoothMask", + "SpeechRecognition", + "SpeechSynthesis", + "SplitImage", + "SplitLongMask", + "SvgImage", + "SwitchByIndex", + "TESTNODE_", + "TESTNODE_TOKEN", + "TextImage", + "TextInput_", + "TextToNumber", + "TransparentImage", + 
"VAEDecodeConsistencyDecoder", + "VAELoaderConsistencyDecoder" + ], + { + "title_aux": "comfyui-mixlab-nodes" + } + ], + "https://github.com/shadowcz007/comfyui-ultralytics-yolo": [ + [ + "DetectByLabel" + ], + { + "title_aux": "comfyui-ultralytics-yolo" + } + ], + "https://github.com/shiimizu/ComfyUI-PhotoMaker-Plus": [ + [ + "PhotoMakerEncodePlus", + "PhotoMakerStyles", + "PrepImagesForClipVisionFromPath" + ], + { + "title_aux": "ComfyUI PhotoMaker Plus" + } + ], + "https://github.com/shiimizu/ComfyUI-TiledDiffusion": [ + [ + "NoiseInversion", + "TiledDiffusion", + "VAEDecodeTiled_TiledDiffusion", + "VAEEncodeTiled_TiledDiffusion" + ], + { + "title_aux": "Tiled Diffusion & VAE for ComfyUI" + } + ], + "https://github.com/shiimizu/ComfyUI_smZNodes": [ + [ + "smZ CLIPTextEncode", + "smZ Settings" + ], + { + "title_aux": "smZNodes" + } + ], + "https://github.com/shingo1228/ComfyUI-SDXL-EmptyLatentImage": [ + [ + "SDXL Empty Latent Image" + ], + { + "title_aux": "ComfyUI-SDXL-EmptyLatentImage" + } + ], + "https://github.com/shingo1228/ComfyUI-send-eagle-slim": [ + [ + "Send Eagle with text", + "Send Webp Image to Eagle" + ], + { + "title_aux": "ComfyUI-send-Eagle(slim)" + } + ], + "https://github.com/shockz0rz/ComfyUI_InterpolateEverything": [ + [ + "OpenposePreprocessorInterpolate" + ], + { + "title_aux": "InterpolateEverything" + } + ], + "https://github.com/shockz0rz/comfy-easy-grids": [ + [ + "FloatToText", + "GridFloatList", + "GridFloats", + "GridIntList", + "GridInts", + "GridLoras", + "GridStringList", + "GridStrings", + "ImageGridCommander", + "IntToText", + "SaveImageGrid", + "TextConcatenator" + ], + { + "title_aux": "comfy-easy-grids" + } + ], + "https://github.com/siliconflow/onediff_comfy_nodes": [ + [ + "CompareModel", + "ControlNetGraphLoader", + "ControlNetGraphSaver", + "ControlNetSpeedup", + "ModelGraphLoader", + "ModelGraphSaver", + "ModelSpeedup", + "ModuleDeepCacheSpeedup", + "OneDiffCheckpointLoaderSimple", + "SVDSpeedup", + "ShowImageDiff", + "VaeGraphLoader", + "VaeGraphSaver", + "VaeSpeedup" + ], + { + "title_aux": "OneDiff Nodes" + } + ], + "https://github.com/sipherxyz/comfyui-art-venture": [ + [ + "AV_CheckpointMerge", + "AV_CheckpointModelsToParametersPipe", + "AV_CheckpointSave", + "AV_ControlNetEfficientLoader", + "AV_ControlNetEfficientLoaderAdvanced", + "AV_ControlNetEfficientStacker", + "AV_ControlNetEfficientStackerSimple", + "AV_ControlNetLoader", + "AV_ControlNetPreprocessor", + "AV_LoraListLoader", + "AV_LoraListStacker", + "AV_LoraLoader", + "AV_ParametersPipeToCheckpointModels", + "AV_ParametersPipeToPrompts", + "AV_PromptsToParametersPipe", + "AV_SAMLoader", + "AV_VAELoader", + "AspectRatioSelector", + "BLIPCaption", + "BLIPLoader", + "BooleanPrimitive", + "ColorBlend", + "ColorCorrect", + "DeepDanbooruCaption", + "DependenciesEdit", + "Fooocus_KSampler", + "Fooocus_KSamplerAdvanced", + "GetBoolFromJson", + "GetFloatFromJson", + "GetIntFromJson", + "GetObjectFromJson", + "GetSAMEmbedding", + "GetTextFromJson", + "ISNetLoader", + "ISNetSegment", + "ImageAlphaComposite", + "ImageApplyChannel", + "ImageExtractChannel", + "ImageGaussianBlur", + "ImageMuxer", + "ImageRepeat", + "ImageScaleDown", + "ImageScaleDownBy", + "ImageScaleDownToSize", + "ImageScaleToMegapixels", + "LaMaInpaint", + "LoadImageAsMaskFromUrl", + "LoadImageFromUrl", + "LoadJsonFromUrl", + "MergeModels", + "NumberScaler", + "OverlayInpaintedImage", + "OverlayInpaintedLatent", + "PrepareImageAndMaskForInpaint", + "QRCodeGenerator", + "RandomFloat", + "RandomInt", + "SAMEmbeddingToImage", 
+ "SDXLAspectRatioSelector", + "SDXLPromptStyler", + "SeedSelector", + "StringToInt", + "StringToNumber" + ], + { + "title_aux": "comfyui-art-venture" + } + ], + "https://github.com/skfoo/ComfyUI-Coziness": [ + [ + "LoraTextExtractor-b1f83aa2", + "MultiLoraLoader-70bf3d77" + ], + { + "title_aux": "ComfyUI-Coziness" + } + ], + "https://github.com/smagnetize/kb-comfyui-nodes": [ + [ + "SingleImageDataUrlLoader" + ], + { + "title_aux": "kb-comfyui-nodes" + } + ], + "https://github.com/space-nuko/ComfyUI-Disco-Diffusion": [ + [ + "DiscoDiffusion_DiscoDiffusion", + "DiscoDiffusion_DiscoDiffusionExtraSettings", + "DiscoDiffusion_GuidedDiffusionLoader", + "DiscoDiffusion_OpenAICLIPLoader" + ], + { + "title_aux": "Disco Diffusion" + } + ], + "https://github.com/space-nuko/ComfyUI-OpenPose-Editor": [ + [ + "Nui.OpenPoseEditor" + ], + { + "title_aux": "OpenPose Editor" + } + ], + "https://github.com/space-nuko/nui-suite": [ + [ + "Nui.DynamicPromptsTextGen", + "Nui.FeelingLuckyTextGen", + "Nui.OutputString" + ], + { + "title_aux": "nui suite" + } + ], + "https://github.com/spacepxl/ComfyUI-HQ-Image-Save": [ + [ + "LoadEXR", + "LoadLatentEXR", + "SaveEXR", + "SaveLatentEXR", + "SaveTiff" + ], + { + "title_aux": "ComfyUI-HQ-Image-Save" + } + ], + "https://github.com/spacepxl/ComfyUI-Image-Filters": [ + [ + "AdainImage", + "AdainLatent", + "AlphaClean", + "AlphaMatte", + "BatchAlign", + "BatchAverageImage", + "BatchAverageUnJittered", + "BatchNormalizeImage", + "BatchNormalizeLatent", + "BlurImageFast", + "BlurMaskFast", + "ClampOutliers", + "ConvertNormals", + "DifferenceChecker", + "DilateErodeMask", + "EnhanceDetail", + "ExposureAdjust", + "GuidedFilterAlpha", + "ImageConstant", + "ImageConstantHSV", + "JitterImage", + "Keyer", + "LatentStats", + "NormalMapSimple", + "OffsetLatentImage", + "RemapRange", + "Tonemap", + "UnJitterImage", + "UnTonemap" + ], + { + "title_aux": "ComfyUI-Image-Filters" + } + ], + "https://github.com/spacepxl/ComfyUI-RAVE": [ + [ + "ConditioningDebug", + "ImageGridCompose", + "ImageGridDecompose", + "KSamplerRAVE", + "LatentGridCompose", + "LatentGridDecompose" + ], + { + "title_aux": "ComfyUI-RAVE" + } + ], + "https://github.com/spinagon/ComfyUI-seam-carving": [ + [ + "SeamCarving" + ], + { + "title_aux": "ComfyUI-seam-carving" + } + ], + "https://github.com/spinagon/ComfyUI-seamless-tiling": [ + [ + "CircularVAEDecode", + "MakeCircularVAE", + "OffsetImage", + "SeamlessTile" + ], + { + "title_aux": "Seamless tiling Node for ComfyUI" + } + ], + "https://github.com/spro/comfyui-mirror": [ + [ + "LatentMirror" + ], + { + "title_aux": "Latent Mirror node for ComfyUI" + } + ], + "https://github.com/ssitu/ComfyUI_UltimateSDUpscale": [ + [ + "UltimateSDUpscale", + "UltimateSDUpscaleNoUpscale" + ], + { + "title_aux": "UltimateSDUpscale" + } + ], + "https://github.com/ssitu/ComfyUI_fabric": [ + [ + "FABRICPatchModel", + "FABRICPatchModelAdv", + "KSamplerAdvFABRICAdv", + "KSamplerFABRIC", + "KSamplerFABRICAdv" + ], + { + "title_aux": "ComfyUI fabric" + } + ], + "https://github.com/ssitu/ComfyUI_restart_sampling": [ + [ + "KRestartSampler", + "KRestartSamplerAdv", + "KRestartSamplerSimple" + ], + { + "title_aux": "Restart Sampling" + } + ], + "https://github.com/ssitu/ComfyUI_roop": [ + [ + "RoopImproved", + "roop" + ], + { + "title_aux": "ComfyUI roop" + } + ], + "https://github.com/storyicon/comfyui_segment_anything": [ + [ + "GroundingDinoModelLoader (segment anything)", + "GroundingDinoSAMSegment (segment anything)", + "InvertMask (segment anything)", + "IsMaskEmpty", + 
"SAMModelLoader (segment anything)" + ], + { + "title_aux": "segment anything" + } + ], + "https://github.com/strimmlarn/ComfyUI_Strimmlarns_aesthetic_score": [ + [ + "AesthetlcScoreSorter", + "CalculateAestheticScore", + "LoadAesteticModel", + "ScoreToNumber" + ], + { + "title_aux": "ComfyUI_Strimmlarns_aesthetic_score" + } + ], + "https://github.com/styler00dollar/ComfyUI-deepcache": [ + [ + "DeepCache" + ], + { + "title_aux": "ComfyUI-deepcache" + } + ], + "https://github.com/styler00dollar/ComfyUI-sudo-latent-upscale": [ + [ + "SudoLatentUpscale" + ], + { + "title_aux": "ComfyUI-sudo-latent-upscale" + } + ], + "https://github.com/syllebra/bilbox-comfyui": [ + [ + "BilboXLut", + "BilboXPhotoPrompt", + "BilboXVignette" + ], + { + "title_aux": "BilboX's ComfyUI Custom Nodes" + } + ], + "https://github.com/sylym/comfy_vid2vid": [ + [ + "CheckpointLoaderSimpleSequence", + "DdimInversionSequence", + "KSamplerSequence", + "LoadImageMaskSequence", + "LoadImageSequence", + "LoraLoaderSequence", + "SetLatentNoiseSequence", + "TrainUnetSequence", + "VAEEncodeForInpaintSequence" + ], + { + "title_aux": "Vid2vid" + } + ], + "https://github.com/szhublox/ambw_comfyui": [ + [ + "Auto Merge Block Weighted", + "CLIPMergeSimple", + "CheckpointSave", + "ModelMergeBlocks", + "ModelMergeSimple" + ], + { + "title_aux": "Auto-MBW" + } + ], + "https://github.com/taabata/Comfy_Syrian_Falcon_Nodes/raw/main/SyrianFalconNodes.py": [ + [ + "CompositeImage", + "KSamplerAlternate", + "KSamplerPromptEdit", + "KSamplerPromptEditAndAlternate", + "LoopBack", + "QRGenerate", + "WordAsImage" + ], + { + "title_aux": "Syrian Falcon Nodes" + } + ], + "https://github.com/taabata/LCM_Inpaint-Outpaint_Comfy": [ + [ + "ComfyNodesToSaveCanvas", + "FloatNumber", + "FreeU_LCM", + "ImageOutputToComfyNodes", + "ImageShuffle", + "ImageSwitch", + "LCMGenerate", + "LCMGenerate_ReferenceOnly", + "LCMGenerate_SDTurbo", + "LCMGenerate_img2img", + "LCMGenerate_img2img_IPAdapter", + "LCMGenerate_img2img_controlnet", + "LCMGenerate_inpaintv2", + "LCMGenerate_inpaintv3", + "LCMLoader", + "LCMLoader_RefInpaint", + "LCMLoader_ReferenceOnly", + "LCMLoader_SDTurbo", + "LCMLoader_controlnet", + "LCMLoader_controlnet_inpaint", + "LCMLoader_img2img", + "LCMLoraLoader_inpaint", + "LCMLoraLoader_ipadapter", + "LCMLora_inpaint", + "LCMLora_ipadapter", + "LCMT2IAdapter", + "LCM_IPAdapter", + "LCM_IPAdapter_inpaint", + "LCM_outpaint_prep", + "LoadImageNode_LCM", + "Loader_SegmindVega", + "OutpaintCanvasTool", + "SaveImage_Canvas", + "SaveImage_LCM", + "SaveImage_Puzzle", + "SaveImage_PuzzleV2", + "SegmindVega", + "SettingsSwitch", + "stitch" + ], + { + "title_aux": "LCM_Inpaint-Outpaint_Comfy" + } + ], + "https://github.com/talesofai/comfyui-browser": [ + [ + "DifyTextGenerator //Browser", + "LoadImageByUrl //Browser", + "SelectInputs //Browser", + "UploadToRemote //Browser", + "XyzPlot //Browser" + ], + { + "title_aux": "ComfyUI Browser" + } + ], + "https://github.com/theUpsider/ComfyUI-Logic": [ + [ + "Bool", + "Compare", + "DebugPrint", + "Float", + "If ANY execute A else B", + "Int", + "String" + ], + { + "title_aux": "ComfyUI-Logic" + } + ], + "https://github.com/theUpsider/ComfyUI-Styles_CSV_Loader": [ + [ + "Load Styles CSV" + ], + { + "title_aux": "Styles CSV Loader Extension for ComfyUI" + } + ], + "https://github.com/thecooltechguy/ComfyUI-MagicAnimate": [ + [ + "MagicAnimate", + "MagicAnimateModelLoader" + ], + { + "title_aux": "ComfyUI-MagicAnimate" + } + ], + "https://github.com/thecooltechguy/ComfyUI-Stable-Video-Diffusion": [ + [ + 
"SVDDecoder", + "SVDModelLoader", + "SVDSampler", + "SVDSimpleImg2Vid" + ], + { + "title_aux": "ComfyUI Stable Video Diffusion" + } + ], + "https://github.com/thedyze/save-image-extended-comfyui": [ + [ + "SaveImageExtended" + ], + { + "title_aux": "Save Image Extended for ComfyUI" + } + ], + "https://github.com/tocubed/ComfyUI-AudioReactor": [ + [ + "AudioFrameTransformBeats", + "AudioFrameTransformShadertoy", + "AudioLoadPath", + "Shadertoy" + ], + { + "title_aux": "ComfyUI-AudioReactor" + } + ], + "https://github.com/toyxyz/ComfyUI_toyxyz_test_nodes": [ + [ + "CaptureWebcam", + "LatentDelay", + "LoadWebcamImage", + "SaveImagetoPath" + ], + { + "title_aux": "ComfyUI_toyxyz_test_nodes" + } + ], + "https://github.com/trojblue/trNodes": [ + [ + "JpgConvertNode", + "trColorCorrection", + "trLayering", + "trRouter", + "trRouterLonger" + ], + { + "title_aux": "trNodes" + } + ], + "https://github.com/trumanwong/ComfyUI-NSFW-Detection": [ + [ + "NSFWDetection" + ], + { + "title_aux": "ComfyUI-NSFW-Detection" + } + ], + "https://github.com/ttulttul/ComfyUI-Iterative-Mixer": [ + [ + "Batch Unsampler", + "Iterative Mixing KSampler", + "Iterative Mixing KSampler Advanced", + "IterativeMixingSampler", + "IterativeMixingScheduler", + "IterativeMixingSchedulerAdvanced", + "Latent Batch Comparison Plot", + "Latent Batch Statistics Plot", + "MixingMaskGenerator" + ], + { + "title_aux": "ComfyUI Iterative Mixing Nodes" + } + ], + "https://github.com/ttulttul/ComfyUI-Tensor-Operations": [ + [ + "Image Match Normalize", + "Latent Match Normalize" + ], + { + "title_aux": "ComfyUI-Tensor-Operations" + } + ], + "https://github.com/tudal/Hakkun-ComfyUI-nodes/raw/main/hakkun_nodes.py": [ + [ + "Any Converter", + "Calculate Upscale", + "Image Resize To Height", + "Image Resize To Width", + "Image size to string", + "Load Random Image", + "Load Text", + "Multi Text Merge", + "Prompt Parser", + "Random Line", + "Random Line 4" + ], + { + "title_aux": "Hakkun-ComfyUI-nodes" + } + ], + "https://github.com/tusharbhutt/Endless-Nodes": [ + [ + "ESS Aesthetic Scoring", + "ESS Aesthetic Scoring Auto", + "ESS Combo Parameterizer", + "ESS Combo Parameterizer & Prompts", + "ESS Eight Input Random", + "ESS Eight Input Text Switch", + "ESS Float to Integer", + "ESS Float to Number", + "ESS Float to String", + "ESS Float to X", + "ESS Global Envoy", + "ESS Image Reward", + "ESS Image Reward Auto", + "ESS Image Saver with JSON", + "ESS Integer to Float", + "ESS Integer to Number", + "ESS Integer to String", + "ESS Integer to X", + "ESS Number to Float", + "ESS Number to Integer", + "ESS Number to String", + "ESS Number to X", + "ESS Parameterizer", + "ESS Parameterizer & Prompts", + "ESS Six Float Output", + "ESS Six Input Random", + "ESS Six Input Text Switch", + "ESS Six Integer IO Switch", + "ESS Six Integer IO Widget", + "ESS String to Float", + "ESS String to Integer", + "ESS String to Num", + "ESS String to X", + "\u267e\ufe0f\ud83c\udf0a\u2728 Image Saver with JSON" + ], + { + "author": "BiffMunky", + "description": "A small set of nodes I created for various numerical and text inputs. 
Features an image saver that can write its JSON metadata to a separate folder, parameter collection nodes, two aesthetic scoring models, switches for text and numbers, and conversion between strings and numbers.", + "nickname": "\u267e\ufe0f\ud83c\udf0a\u2728", + "title": "Endless \u267e\ufe0f\ud83c\udf0a\u2728 Nodes", + "title_aux": "Endless \u267e\ufe0f\ud83c\udf0a\u2728 Nodes" + } + ], + "https://github.com/twri/sdxl_prompt_styler": [ + [ + "SDXLPromptStyler", + "SDXLPromptStylerAdvanced" + ], + { + "title_aux": "SDXL Prompt Styler" + } + ], + "https://github.com/uarefans/ComfyUI-Fans": [ + [ + "Fans Prompt Styler Negative", + "Fans Prompt Styler Positive", + "Fans Styler", + "Fans Text Concatenate" + ], + { + "title_aux": "ComfyUI-Fans" + } + ], + "https://github.com/vanillacode314/SimpleWildcardsComfyUI": [ + [ + "SimpleConcat", + "SimpleWildcard" + ], + { + "author": "VanillaCode314", + "description": "A simple wildcard node for ComfyUI. Can also be used as a style prompt node.", + "nickname": "Simple Wildcard", + "title": "Simple Wildcard", + "title_aux": "Simple Wildcard" + } + ], + "https://github.com/vienteck/ComfyUI-Chat-GPT-Integration": [ + [ + "ChatGptPrompt" + ], + { + "title_aux": "ComfyUI-Chat-GPT-Integration" + } + ], + "https://github.com/violet-chen/comfyui-psd2png": [ + [ + "Psd2Png" + ], + { + "title_aux": "comfyui-psd2png" + } + ], + "https://github.com/wallish77/wlsh_nodes": [ + [ + "Alternating KSampler (WLSH)", + "Build Filename String (WLSH)", + "CLIP +/- w/Text Unified (WLSH)", + "CLIP Positive-Negative (WLSH)", + "CLIP Positive-Negative XL (WLSH)", + "CLIP Positive-Negative XL w/Text (WLSH)", + "CLIP Positive-Negative w/Text (WLSH)", + "Checkpoint Loader w/Name (WLSH)", + "Empty Latent by Pixels (WLSH)", + "Empty Latent by Ratio (WLSH)", + "Empty Latent by Size (WLSH)", + "Generate Border Mask (WLSH)", + "Grayscale Image (WLSH)", + "Image Load with Metadata (WLSH)", + "Image Save with Prompt (WLSH)", + "Image Save with Prompt File (WLSH)", + "Image Save with Prompt/Info (WLSH)", + "Image Save with Prompt/Info File (WLSH)", + "Image Scale By Factor (WLSH)", + "Image Scale by Shortside (WLSH)", + "KSamplerAdvanced (WLSH)", + "Multiply Integer (WLSH)", + "Outpaint to Image (WLSH)", + "Prompt Weight (WLSH)", + "Quick Resolution Multiply (WLSH)", + "Resolutions by Ratio (WLSH)", + "SDXL Quick Empty Latent (WLSH)", + "SDXL Quick Image Scale (WLSH)", + "SDXL Resolutions (WLSH)", + "SDXL Steps (WLSH)", + "Save Positive Prompt(WLSH)", + "Save Prompt (WLSH)", + "Save Prompt/Info (WLSH)", + "Seed and Int (WLSH)", + "Seed to Number (WLSH)", + "Simple Pattern Replace (WLSH)", + "Simple String Combine (WLSH)", + "Time String (WLSH)", + "Upscale by Factor with Model (WLSH)", + "VAE Encode for Inpaint w/Padding (WLSH)" + ], + { + "title_aux": "wlsh_nodes" + } + ], + "https://github.com/whatbirdisthat/cyberdolphin": [ + [ + "\ud83d\udc2c Gradio ChatInterface", + "\ud83d\udc2c OpenAI Advanced", + "\ud83d\udc2c OpenAI Compatible", + "\ud83d\udc2c OpenAI DALL\u00b7E", + "\ud83d\udc2c OpenAI Simple" + ], + { + "title_aux": "cyberdolphin" + } + ], + "https://github.com/whmc76/ComfyUI-Openpose-Editor-Plus": [ + [ + "CDL.OpenPoseEditorPlus" + ], + { + "title_aux": "ComfyUI-Openpose-Editor-Plus" + } + ], + "https://github.com/wmatson/easy-comfy-nodes": [ + [ + "EZAssocDictNode", + "EZAssocImgNode", + "EZAssocStrNode", + "EZEmptyDictNode", + "EZHttpPostNode", + "EZLoadImgBatchFromUrlsNode", + "EZLoadImgFromUrlNode", + "EZRemoveImgBackground", + "EZS3Uploader", + "EZVideoCombiner" + ], + { + 
"title_aux": "easy-comfy-nodes" + } + ], + "https://github.com/wolfden/ComfyUi_PromptStylers": [ + [ + "SDXLPromptStylerAll", + "SDXLPromptStylerHorror", + "SDXLPromptStylerMisc", + "SDXLPromptStylerbyArtist", + "SDXLPromptStylerbyCamera", + "SDXLPromptStylerbyComposition", + "SDXLPromptStylerbyCyberpunkSurrealism", + "SDXLPromptStylerbyDepth", + "SDXLPromptStylerbyEnvironment", + "SDXLPromptStylerbyFantasySetting", + "SDXLPromptStylerbyFilter", + "SDXLPromptStylerbyFocus", + "SDXLPromptStylerbyImpressionism", + "SDXLPromptStylerbyLighting", + "SDXLPromptStylerbyMileHigh", + "SDXLPromptStylerbyMood", + "SDXLPromptStylerbyMythicalCreature", + "SDXLPromptStylerbyOriginal", + "SDXLPromptStylerbyQuantumRealism", + "SDXLPromptStylerbySteamPunkRealism", + "SDXLPromptStylerbySubject", + "SDXLPromptStylerbySurrealism", + "SDXLPromptStylerbyTheme", + "SDXLPromptStylerbyTimeofDay", + "SDXLPromptStylerbyWyvern", + "SDXLPromptbyCelticArt", + "SDXLPromptbyContemporaryNordicArt", + "SDXLPromptbyFashionArt", + "SDXLPromptbyGothicRevival", + "SDXLPromptbyIrishFolkArt", + "SDXLPromptbyRomanticNationalismArt", + "SDXLPromptbySportsArt", + "SDXLPromptbyStreetArt", + "SDXLPromptbyVikingArt", + "SDXLPromptbyWildlifeArt" + ], + { + "title_aux": "SDXL Prompt Styler (customized version by wolfden)" + } + ], + "https://github.com/wolfden/ComfyUi_String_Function_Tree": [ + [ + "StringFunction" + ], + { + "title_aux": "ComfyUi_String_Function_Tree" + } + ], + "https://github.com/wsippel/comfyui_ws/raw/main/sdxl_utility.py": [ + [ + "SDXLResolutionPresets" + ], + { + "title_aux": "SDXLResolutionPresets" + } + ], + "https://github.com/wutipong/ComfyUI-TextUtils": [ + [ + "Text Utils - Join N-Elements of String List", + "Text Utils - Join String List", + "Text Utils - Join Strings", + "Text Utils - Split String to List" + ], + { + "title_aux": "ComfyUI-TextUtils" + } + ], + "https://github.com/wwwins/ComfyUI-Simple-Aspect-Ratio": [ + [ + "SimpleAspectRatio" + ], + { + "title_aux": "ComfyUI-Simple-Aspect-Ratio" + } + ], + "https://github.com/xXAdonesXx/NodeGPT": [ + [ + "AppendAgent", + "Assistant", + "Chat", + "ChatGPT", + "CombineInput", + "Conditioning", + "CostumeAgent_1", + "CostumeAgent_2", + "CostumeMaster_1", + "Critic", + "DisplayString", + "DisplayTextAsImage", + "EVAL", + "Engineer", + "Executor", + "GroupChat", + "Image_generation_Conditioning", + "LM_Studio", + "LoadAPIconfig", + "LoadTXT", + "MemGPT", + "Memory_Excel", + "Model_1", + "Ollama", + "Output2String", + "Planner", + "Scientist", + "TextCombine", + "TextGeneration", + "TextGenerator", + "TextInput", + "TextOutput", + "UserProxy", + "llama-cpp", + "llava", + "oobaboogaOpenAI" + ], + { + "title_aux": "NodeGPT" + } + ], + "https://github.com/xiaoxiaodesha/hd_node": [ + [ + "Combine HDMasks", + "Cover HDMasks", + "HD FaceIndex", + "HD GetMaskArea", + "HD Image Levels", + "HD SmoothEdge", + "HD UltimateSDUpscale" + ], + { + "title_aux": "hd-nodes-comfyui" + } + ], + "https://github.com/yffyhk/comfyui_auto_danbooru": [ + [ + "GetDanbooru", + "TagEncode" + ], + { + "title_aux": "comfyui_auto_danbooru" + } + ], + "https://github.com/yolain/ComfyUI-Easy-Use": [ + [ + "dynamicThresholdingFull", + "easy LLLiteLoader", + "easy XYInputs: CFG Scale", + "easy XYInputs: Checkpoint", + "easy XYInputs: ControlNet", + "easy XYInputs: Denoise", + "easy XYInputs: Lora", + "easy XYInputs: ModelMergeBlocks", + "easy XYInputs: NegativeCond", + "easy XYInputs: NegativeCondList", + "easy XYInputs: PositiveCond", + "easy XYInputs: PositiveCondList", + "easy XYInputs: 
PromptSR", + "easy XYInputs: Sampler/Scheduler", + "easy XYInputs: Seeds++ Batch", + "easy XYInputs: Steps", + "easy XYPlot", + "easy XYPlotAdvanced", + "easy a1111Loader", + "easy boolean", + "easy cascadeLoader", + "easy cleanGpuUsed", + "easy comfyLoader", + "easy compare", + "easy controlnetLoader", + "easy controlnetLoaderADV", + "easy convertAnything", + "easy detailerFix", + "easy float", + "easy fooocusInpaintLoader", + "easy fullLoader", + "easy fullkSampler", + "easy globalSeed", + "easy hiresFix", + "easy if", + "easy imageInsetCrop", + "easy imagePixelPerfect", + "easy imageRemoveBG", + "easy imageSave", + "easy imageScaleDown", + "easy imageScaleDownBy", + "easy imageScaleDownToSize", + "easy imageSize", + "easy imageSizeByLongerSide", + "easy imageSizeBySide", + "easy imageSwitch", + "easy imageToMask", + "easy int", + "easy isSDXL", + "easy joinImageBatch", + "easy kSampler", + "easy kSamplerDownscaleUnet", + "easy kSamplerInpainting", + "easy kSamplerSDTurbo", + "easy kSamplerTiled", + "easy latentCompositeMaskedWithCond", + "easy latentNoisy", + "easy loraStack", + "easy negative", + "easy pipeIn", + "easy pipeOut", + "easy pipeToBasicPipe", + "easy portraitMaster", + "easy poseEditor", + "easy positive", + "easy preDetailerFix", + "easy preSampling", + "easy preSamplingAdvanced", + "easy preSamplingCascade", + "easy preSamplingDynamicCFG", + "easy preSamplingSdTurbo", + "easy promptList", + "easy rangeFloat", + "easy rangeInt", + "easy samLoaderPipe", + "easy seed", + "easy showAnything", + "easy showLoaderSettingsNames", + "easy showSpentTime", + "easy string", + "easy stylesSelector", + "easy svdLoader", + "easy ultralyticsDetectorPipe", + "easy unSampler", + "easy wildcards", + "easy xyAny", + "easy zero123Loader" + ], + { + "title_aux": "ComfyUI Easy Use" + } + ], + "https://github.com/yolanother/DTAIComfyImageSubmit": [ + [ + "DTSimpleSubmitImage", + "DTSubmitImage" + ], + { + "title_aux": "Comfy AI DoubTech.ai Image Sumission Node" + } + ], + "https://github.com/yolanother/DTAIComfyLoaders": [ + [ + "DTCLIPLoader", + "DTCLIPVisionLoader", + "DTCheckpointLoader", + "DTCheckpointLoaderSimple", + "DTControlNetLoader", + "DTDiffControlNetLoader", + "DTDiffusersLoader", + "DTGLIGENLoader", + "DTLoadImage", + "DTLoadImageMask", + "DTLoadLatent", + "DTLoraLoader", + "DTLorasLoader", + "DTStyleModelLoader", + "DTUpscaleModelLoader", + "DTVAELoader", + "DTunCLIPCheckpointLoader" + ], + { + "title_aux": "Comfy UI Online Loaders" + } + ], + "https://github.com/yolanother/DTAIComfyPromptAgent": [ + [ + "DTPromptAgent", + "DTPromptAgentString" + ], + { + "title_aux": "Comfy UI Prompt Agent" + } + ], + "https://github.com/yolanother/DTAIComfyQRCodes": [ + [ + "QRCode" + ], + { + "title_aux": "Comfy UI QR Codes" + } + ], + "https://github.com/yolanother/DTAIComfyVariables": [ + [ + "DTCLIPTextEncode", + "DTSingleLineStringVariable", + "DTSingleLineStringVariableNoClip", + "FloatVariable", + "IntVariable", + "StringFormat", + "StringFormatSingleLine", + "StringVariable" + ], + { + "title_aux": "Variables for Comfy UI" + } + ], + "https://github.com/yolanother/DTAIImageToTextNode": [ + [ + "DTAIImageToTextNode", + "DTAIImageUrlToTextNode" + ], + { + "title_aux": "Image to Text Node" + } + ], + "https://github.com/youyegit/tdxh_node_comfyui": [ + [ + "TdxhBoolNumber", + "TdxhClipVison", + "TdxhControlNetApply", + "TdxhControlNetProcessor", + "TdxhFloatInput", + "TdxhImageToSize", + "TdxhImageToSizeAdvanced", + "TdxhImg2ImgLatent", + "TdxhIntInput", + "TdxhLoraLoader", + 
"TdxhOnOrOff", + "TdxhReference", + "TdxhStringInput", + "TdxhStringInputTranslator" + ], + { + "title_aux": "tdxh_node_comfyui" + } + ], + "https://github.com/yuvraj108c/ComfyUI-Pronodes": [ + [ + "LoadYoutubeVideoNode" + ], + { + "title_aux": "ComfyUI-Pronodes" + } + ], + "https://github.com/yuvraj108c/ComfyUI-Whisper": [ + [ + "Add Subtitles To Background", + "Add Subtitles To Frames", + "Apply Whisper", + "Resize Cropped Subtitles" + ], + { + "title_aux": "ComfyUI Whisper" + } + ], + "https://github.com/zcfrank1st/Comfyui-Toolbox": [ + [ + "PreviewJson", + "PreviewVideo", + "SaveJson", + "TestJsonPreview" + ], + { + "title_aux": "Comfyui-Toolbox" + } + ], + "https://github.com/zcfrank1st/Comfyui-Yolov8": [ + [ + "Yolov8Detection", + "Yolov8Segmentation" + ], + { + "title_aux": "ComfyUI Yolov8" + } + ], + "https://github.com/zcfrank1st/comfyui_visual_anagrams": [ + [ + "VisualAnagramsAnimate", + "VisualAnagramsSample" + ], + { + "title_aux": "comfyui_visual_anagram" + } + ], + "https://github.com/zer0TF/cute-comfy": [ + [ + "Cute.Placeholder" + ], + { + "title_aux": "Cute Comfy" + } + ], + "https://github.com/zfkun/ComfyUI_zfkun": [ + [ + "ZFLoadImagePath", + "ZFPreviewText", + "ZFPreviewTextMultiline", + "ZFShareScreen", + "ZFTextTranslation" + ], + { + "title_aux": "ComfyUI_zfkun" + } + ], + "https://github.com/zhongpei/ComfyUI-InstructIR": [ + [ + "InstructIRProcess", + "LoadInstructIRModel" + ], + { + "title_aux": "ComfyUI for InstructIR" + } + ], + "https://github.com/zhongpei/Comfyui_image2prompt": [ + [ + "Image2Text", + "LoadImage2TextModel" + ], + { + "title_aux": "Comfyui_image2prompt" + } + ], + "https://github.com/zhuanqianfish/ComfyUI-EasyNode": [ + [ + "EasyCaptureNode", + "EasyVideoOutputNode", + "SendImageWebSocket" + ], + { + "title_aux": "EasyCaptureNode for ComfyUI" + } + ], + "https://raw.githubusercontent.com/throttlekitty/SDXLCustomAspectRatio/main/SDXLAspectRatio.py": [ + [ + "SDXLAspectRatio" + ], + { + "title_aux": "SDXLCustomAspectRatio" + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/new/model-list.json b/custom_nodes/ComfyUI-Manager/node_db/new/model-list.json new file mode 100644 index 0000000000000000000000000000000000000000..dfabc35751ae67c9bc76b8316bc0b6d8c1313d33 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/new/model-list.json @@ -0,0 +1,819 @@ +{ + "models": [ + { + "name": "stabilityai/Stable Cascade: effnet_encoder.safetensors (VAE)", + "type": "VAE", + "base": "Stable Cascade", + "save_path": "vae/Stable-Cascade", + "description": "[81.5MB] Stable Cascade: effnet_encoder.\nVAE encoder for stage_c latent.", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "effnet_encoder.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/effnet_encoder.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_a.safetensors (VAE)", + "type": "VAE", + "base": "Stable Cascade", + "save_path": "vae/Stable-Cascade", + "description": "[73.7MB] Stable Cascade: stage_a", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_a.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_a.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_b.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[6.25GB] Stable Cascade: stage_b", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + 
"filename": "stage_b.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_b_bf16.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[3.13GB] Stable Cascade: stage_b/bf16", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_b_bf16.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_bf16.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_b_lite.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[2.8GB] Stable Cascade: stage_b/lite", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_b_lite.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_lite.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_b_lite.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[1.4GB] Stable Cascade: stage_b/bf16,lite", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_b_lite_bf16.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_lite_bf16.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_c.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[14.4GB] Stable Cascade: stage_c", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_c.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_c_bf16.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[7.18GB] Stable Cascade: stage_c/bf16", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_c_bf16.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_bf16.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_c_lite.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[4.12GB] Stable Cascade: stage_c/lite", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_c_lite.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_lite.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: stage_c_lite.safetensors (UNET)", + "type": "unet", + "base": "Stable Cascade", + "save_path": "unet/Stable-Cascade", + "description": "[2.06GB] Stable Cascade: stage_c/bf16,lite", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "stage_c_lite_bf16.safetensors", + "url": "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_lite_bf16.safetensors" + }, + { + "name": "stabilityai/Stable Cascade: text_encoder (CLIP)", + "type": "clip", + "base": "Stable Cascade", + "save_path": "clip/Stable-Cascade", + "description": "[1.39GB] Stable Cascade: text_encoder", + "reference": "https://huggingface.co/stabilityai/stable-cascade", + "filename": "model.safetensors", + "url": 
"https://huggingface.co/stabilityai/stable-cascade/resolve/main/text_encoder/model.safetensors" + }, + + { + "name": "1k3d68.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 1k3d68.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "1k3d68.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/1k3d68.onnx" + }, + { + "name": "2d106det.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 2d106det.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "2d106det.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/2d106det.onnx" + }, + { + "name": "genderage.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 genderage.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "genderage.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/genderage.onnx" + }, + { + "name": "glintr100.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 glintr100.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "glintr100.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/glintr100.onnx" + }, + { + "name": "scrfd_10g_bnkps.onnx", + "type": "insightface", + "base": "inswapper", + "save_path": "insightface/models/antelopev2", + "description": "Antelopev2 scrfd_10g_bnkps.onnx model for InstantId. (InstantId needs all Antelopev2 models)", + "reference": "https://github.com/cubiq/ComfyUI_InstantID#installation", + "filename": "scrfd_10g_bnkps.onnx", + "url": "https://huggingface.co/MonsterMMORPG/tools/resolve/main/scrfd_10g_bnkps.onnx" + }, + + { + "name": "photomaker-v1.bin", + "type": "photomaker", + "base": "SDXL", + "save_path": "photomaker", + "description": "PhotoMaker model. 
This model is compatible with SDXL.", + "reference": "https://huggingface.co/TencentARC/PhotoMaker", + "filename": "photomaker-v1.bin", + "url": "https://huggingface.co/TencentARC/PhotoMaker/resolve/main/photomaker-v1.bin" + }, + { + "name": "ip-adapter-faceid_sdxl.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Model (SDXL) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid_sdxl.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sdxl.bin" + }, + { + "name": "ip-adapter-faceid-plusv2_sdxl.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Plus Model (SDXL) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plusv2_sdxl.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sdxl.bin" + }, + { + "name": "ip-adapter-faceid_sdxl_lora.safetensors", + "type": "lora", + "base": "SDXL", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID LoRA Model (SDXL) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid_sdxl_lora.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sdxl_lora.safetensors" + }, + { + "name": "ip-adapter-faceid-plusv2_sdxl_lora.safetensors", + "type": "lora", + "base": "SDXL", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID-Plus V2 LoRA Model (SDXL) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plusv2_sdxl_lora.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sdxl_lora.safetensors" + }, + + { + "name": "TencentARC/motionctrl.pth", + "type": "checkpoints", + "base": "MotionCtrl", + "save_path": "checkpoints/motionctrl", + "description": "To use the ComfyUI-MotionCtrl extension, downloading this model is required.", + "reference": "https://huggingface.co/TencentARC/MotionCtrl", + "filename": "motionctrl.pth", + "url": "https://huggingface.co/TencentARC/MotionCtrl/resolve/main/motionctrl.pth" + }, + { + "name": "ip-adapter-faceid-plusv2_sd15.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID-Plus V2 Model (SD1.5)", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plusv2_sd15.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sd15.bin" + }, + { + "name": "ip-adapter-faceid-plusv2_sd15_lora.safetensors", + "type": "lora", + "base": "SD1.5", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID-Plus V2 LoRA Model (SD1.5)", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plusv2_sd15_lora.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plusv2_sd15_lora.safetensors" + }, + { + "name": "ip-adapter-faceid-plus_sd15_lora.safetensors", + "type": "lora", + "base": "SD1.5", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID Plus LoRA Model (SD1.5) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plus_sd15_lora.safetensors", + "url": 
"https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plus_sd15_lora.safetensors" + }, + + { + "name": "ControlNet-HandRefiner-pruned (inpaint-depth-hand; fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "This inpaint-depth controlnet model is specialized for the hand refiner.", + "reference": "https://huggingface.co/hr16/ControlNet-HandRefiner-pruned", + "filename": "control_sd15_inpaint_depth_hand_fp16.safetensors", + "url": "https://huggingface.co/hr16/ControlNet-HandRefiner-pruned/resolve/main/control_sd15_inpaint_depth_hand_fp16.safetensors" + }, + { + "name": "stabilityai/stable-diffusion-x4-upscaler", + "type": "checkpoints", + "base": "upscale", + "save_path": "checkpoints/upscale", + "description": "[3.53GB] This upscaling model is a latent text-guided diffusion model and should be used with SD_4XUpscale_Conditioning and KSampler.", + "reference": "https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler", + "filename": "x4-upscaler-ema.safetensors", + "url": "https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.safetensors" + }, + { + "name": "LDSR(Latent Diffusion Super Resolution)", + "type": "upscale", + "base": "upscale", + "save_path": "upscale_models/ldsr", + "description": "LDSR upscale model. Through the [a/ComfyUI-Flowty-LDSR](https://github.com/flowtyone/ComfyUI-Flowty-LDSR) extension, the upscale model can be utilized.", + "reference": "https://github.com/CompVis/latent-diffusion", + "filename": "last.ckpt", + "url": "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1" + }, + { + "name": "control_boxdepth_LooseControlfp16 (fp16)", + "type": "controlnet", + "base": "SD1.5", + "save_path": "default", + "description": "Loose ControlNet model", + "reference": "https://huggingface.co/ioclab/LooseControl_WebUICombine", + "filename": "control_boxdepth_LooseControlfp16.safetensors", + "url": "https://huggingface.co/ioclab/LooseControl_WebUICombine/resolve/main/control_boxdepth_LooseControlfp16.safetensors" + }, + + { + "name": "ip-adapter-faceid-portrait_sd15.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Portrait Model (SD1.5) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-portrait_sd15.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-portrait_sd15.bin" + }, + { + "name": "ip-adapter-faceid-plus_sd15.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Plus Model (SD1.5) [ipadapter]", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid-plus_sd15.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid-plus_sd15.bin" + }, + { + "name": "ip-adapter-faceid_sd15.bin", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "IP-Adapter-FaceID Model (SD1.5)", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": "ip-adapter-faceid_sd15.bin", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sd15.bin" + }, + { + "name": "ip-adapter-faceid_sd15_lora.safetensors", + "type": "lora", + "base": "SD1.5", + "save_path": "loras/ipadapter", + "description": "IP-Adapter-FaceID LoRA Model (SD1.5)", + "reference": "https://huggingface.co/h94/IP-Adapter-FaceID", + "filename": 
"ip-adapter-faceid_sd15_lora.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter-FaceID/resolve/main/ip-adapter-faceid_sd15_lora.safetensors" + }, + + { + "name": "LongAnimatediff/lt_long_mm_16_64_frames_v1.1.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/Lightricks/LongAnimateDiff", + "filename": "lt_long_mm_16_64_frames_v1.1.ckpt", + "url": "https://huggingface.co/Lightricks/LongAnimateDiff/resolve/main/lt_long_mm_16_64_frames_v1.1.ckpt" + }, + + { + "name": "animatediff/v3_sd15_sparsectrl_rgb.ckpt (ComfyUI-AnimateDiff-Evolved)", + "type": "controlnet", + "base": "SD1.x", + "save_path": "controlnet/SD1.5/animatediff", + "description": "AnimateDiff SparseCtrl RGB ControlNet model", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v3_sd15_sparsectrl_rgb.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_sparsectrl_rgb.ckpt" + }, + { + "name": "animatediff/v3_sd15_sparsectrl_scribble.ckpt", + "type": "controlnet", + "base": "SD1.x", + "save_path": "controlnet/SD1.5/animatediff", + "description": "AnimateDiff SparseCtrl Scribble ControlNet model", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v3_sd15_sparsectrl_scribble.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_sparsectrl_scribble.ckpt" + }, + { + "name": "animatediff/v3_sd15_mm.ckpt (ComfyUI-AnimateDiff-Evolved)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "custom_nodes/ComfyUI-AnimateDiff-Evolved/models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node. 
(Note: Requires ComfyUI-Manager V0.24 or above)", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v3_sd15_mm.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_mm.ckpt" + }, + { + "name": "animatediff/v3_sd15_adapter.ckpt", + "type": "lora", + "base": "SD1.x", + "save_path": "loras/SD1.5/animatediff", + "description": "AnimateDiff Adapter LoRA (SD1.5)", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v3_sd15_adapter.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_adapter.ckpt" + }, + + { + "name": "Segmind-Vega", + "type": "checkpoints", + "base": "segmind-vega", + "save_path": "checkpoints/segmind-vega", + "description": "The Segmind-Vega Model is a distilled version of the Stable Diffusion XL (SDXL), offering a remarkable 70% reduction in size and an impressive 100% speedup while retaining high-quality text-to-image generation capabilities.", + "reference": "https://huggingface.co/segmind/Segmind-Vega", + "filename": "segmind-vega.safetensors", + "url": "https://huggingface.co/segmind/Segmind-Vega/resolve/main/segmind-vega.safetensors" + }, + { + "name": "Segmind-VegaRT - Latent Consistency Model (LCM) LoRA of Segmind-Vega", + "type": "lora", + "base": "segmind-vega", + "save_path": "loras/segmind-vega", + "description": "Segmind-VegaRT is a distilled consistency adapter for Segmind-Vega that reduces the number of inference steps to between 2 and 8.", + "reference": "https://huggingface.co/segmind/Segmind-VegaRT", + "filename": "pytorch_lora_weights.safetensors", + "url": "https://huggingface.co/segmind/Segmind-VegaRT/resolve/main/pytorch_lora_weights.safetensors" + }, + + { + "name": "stabilityai/Stable Zero123", + "type": "zero123", + "base": "zero123", + "save_path": "checkpoints/zero123", + "description": "Stable Zero123 is a model for view-conditioned image generation based on [a/Zero123](https://github.com/cvlab-columbia/zero123).", + "reference": "https://huggingface.co/stabilityai/stable-zero123", + "filename": "stable_zero123.ckpt", + "url": "https://huggingface.co/stabilityai/stable-zero123/resolve/main/stable_zero123.ckpt" + }, + { + "name": "LongAnimatediff/lt_long_mm_32_frames.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/Lightricks/LongAnimateDiff", + "filename": "lt_long_mm_32_frames.ckpt", + "url": "https://huggingface.co/Lightricks/LongAnimateDiff/resolve/main/lt_long_mm_32_frames.ckpt" + }, + { + "name": "LongAnimatediff/lt_long_mm_16_64_frames.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/Lightricks/LongAnimateDiff", + "filename": "lt_long_mm_16_64_frames.ckpt", + "url": "https://huggingface.co/Lightricks/LongAnimateDiff/resolve/main/lt_long_mm_16_64_frames.ckpt" + }, + { + "name": "ip-adapter_sd15.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": 
"https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sd15.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15.safetensors" + }, + { + "name": "ip-adapter_sd15_light.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sd15_light.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15_light.safetensors" + }, + { + "name": "ip-adapter_sd15_vit-G.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sd15_vit-G.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15_vit-G.safetensors" + }, + { + "name": "ip-adapter-plus_sd15.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-plus_sd15.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus_sd15.safetensors" + }, + { + "name": "ip-adapter-plus-face_sd15.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-plus-face_sd15.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus-face_sd15.safetensors" + }, + { + "name": "ip-adapter-full-face_sd15.safetensors", + "type": "IP-Adapter", + "base": "SD1.5", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-full-face_sd15.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-full-face_sd15.safetensors" + }, + { + "name": "ip-adapter_sdxl.safetensors", + "type": "IP-Adapter", + "base": "SDXL", + "save_path": "ipadapter", + "description": "You can use this model in the [a/ComfyUI IPAdapter plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus) extension.", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sdxl.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl.safetensors" + }, + { + "name": "ip-adapter_sdxl_vit-h.safetensors", + "type": "IP-Adapter", + "base": "SDXL", + "save_path": "ipadapter", + "description": "This model requires the use of the SD1.5 encoder despite being for SDXL checkpoints", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter_sdxl_vit-h.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl_vit-h.safetensors" + }, + { + "name": "ip-adapter-plus_sdxl_vit-h.safetensors", + "type": 
"IP-Adapter", + "base": "SDXL", + "save_path": "ipadapter", + "description": "This model requires the use of the SD1.5 encoder despite being for SDXL checkpoints", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-plus_sdxl_vit-h.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter-plus_sdxl_vit-h.safetensors" + }, + { + "name": "ip-adapter-plus-face_sdxl_vit-h.safetensors", + "type": "IP-Adapter", + "base": "SDXL", + "save_path": "ipadapter", + "description": "This model requires the use of the SD1.5 encoder despite being for SDXL checkpoints", + "reference": "https://huggingface.co/h94/IP-Adapter", + "filename": "ip-adapter-plus-face_sdxl_vit-h.safetensors", + "url": "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter-plus-face_sdxl_vit-h.safetensors" + }, + + { + "name": "SDXL-Turbo 1.0 (fp16)", + "type": "checkpoints", + "base": "SDXL", + "save_path": "checkpoints/SDXL-TURBO", + "description": "[6.9GB] SDXL-Turbo 1.0 fp16", + "reference": "https://huggingface.co/stabilityai/sdxl-turbo", + "filename": "sd_xl_turbo_1.0_fp16.safetensors", + "url": "https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors" + }, + { + "name": "SDXL-Turbo 1.0", + "type": "checkpoints", + "base": "SDXL", + "save_path": "checkpoints/SDXL-TURBO", + "description": "[13.9GB] SDXL-Turbo 1.0", + "reference": "https://huggingface.co/stabilityai/sdxl-turbo", + "filename": "sd_xl_turbo_1.0.safetensors", + "url": "https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0.safetensors" + }, + { + "name": "Stable Video Diffusion Image-to-Video", + "type": "checkpoints", + "base": "SVD", + "save_path": "checkpoints/SVD", + "description": "Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.\nNOTE: 14 frames @ 576x1024", + "reference": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid", + "filename": "svd.safetensors", + "url": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/resolve/main/svd.safetensors" + }, + { + "name": "Stable Video Diffusion Image-to-Video (XT)", + "type": "checkpoints", + "base": "SVD", + "save_path": "checkpoints/SVD", + "description": "Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.\nNOTE: 25 frames @ 576x1024 ", + "reference": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt", + "filename": "svd_xt.safetensors", + "url": "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/svd_xt.safetensors" + }, + + { + "name": "animatediff/mm_sdxl_v10_beta.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SDXL", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "mm_sdxl_v10_beta.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sdxl_v10_beta.ckpt" + }, + { + "name": "animatediff/v2_lora_PanLeft.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the 
Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_PanLeft.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_PanLeft.ckpt" + }, + { + "name": "animatediff/v2_lora_PanRight.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_PanRight.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_PanRight.ckpt" + }, + { + "name": "animatediff/v2_lora_RollingAnticlockwise.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_RollingAnticlockwise.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_RollingAnticlockwise.ckpt" + }, + { + "name": "animatediff/v2_lora_RollingClockwise.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_RollingClockwise.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_RollingClockwise.ckpt" + }, + { + "name": "animatediff/v2_lora_TiltDown.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_TiltDown.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_TiltDown.ckpt" + }, + { + "name": "animatediff/v2_lora_TiltUp.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_TiltUp.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_TiltUp.ckpt" + }, + { + "name": "animatediff/v2_lora_ZoomIn.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_ZoomIn.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_ZoomIn.ckpt" + }, + { + "name": "animatediff/v2_lora_ZoomOut.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "motion lora", + "base": "SD1.x", + "save_path": "animatediff_motion_lora", + "description": "Pressing 
'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "v2_lora_ZoomOut.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_ZoomOut.ckpt" + }, + + { + "name": "CiaraRowles/TemporalNet1XL (1.0)", + "type": "controlnet", + "base": "SDXL", + "save_path": "controlnet/TemporalNet1XL", + "description": "This is TemporalNet1XL, a re-train of the ControlNet TemporalNet1 with Stable Diffusion XL.", + "reference": "https://huggingface.co/CiaraRowles/controlnet-temporalnet-sdxl-1.0", + "filename": "diffusion_pytorch_model.safetensors", + "url": "https://huggingface.co/CiaraRowles/controlnet-temporalnet-sdxl-1.0/resolve/main/diffusion_pytorch_model.safetensors" + }, + + { + "name": "LCM LoRA SD1.5", + "type": "lora", + "base": "SD1.5", + "save_path": "loras/lcm/SD1.5", + "description": "Latent Consistency LoRA for SD1.5", + "reference": "https://huggingface.co/latent-consistency/lcm-lora-sdv1-5", + "filename": "pytorch_lora_weights.safetensors", + "url": "https://huggingface.co/latent-consistency/lcm-lora-sdv1-5/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "name": "LCM LoRA SSD-1B", + "type": "lora", + "base": "SSD-1B", + "save_path": "loras/lcm/SSD-1B", + "description": "Latent Consistency LoRA for SSD-1B", + "reference": "https://huggingface.co/latent-consistency/lcm-lora-ssd-1b", + "filename": "pytorch_lora_weights.safetensors", + "url": "https://huggingface.co/latent-consistency/lcm-lora-ssd-1b/resolve/main/pytorch_lora_weights.safetensors" + }, + { + "name": "LCM LoRA SDXL", + "type": "lora", + "base": "SDXL", + "save_path": "loras/lcm/SDXL", + "description": "Latent Consistency LoRA for SDXL", + "reference": "https://huggingface.co/latent-consistency/lcm-lora-sdxl", + "filename": "pytorch_lora_weights.safetensors", + "url": "https://huggingface.co/latent-consistency/lcm-lora-sdxl/resolve/main/pytorch_lora_weights.safetensors" + }, + + { + "name": "face_yolov8m-seg_60.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "face_yolov8m-seg_60.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/face_yolov8m-seg_60.pt" + }, + { + "name": "face_yolov8n-seg2_60.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "face_yolov8n-seg2_60.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/face_yolov8n-seg2_60.pt" + }, + { + "name": "hair_yolov8n-seg_60.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "hair_yolov8n-seg_60.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/hair_yolov8n-seg_60.pt" + }, + { + "name": "skin_yolov8m-seg_400.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available
models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "skin_yolov8m-seg_400.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/skin_yolov8m-seg_400.pt" + }, + { + "name": "skin_yolov8n-seg_400.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "skin_yolov8n-seg_400.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/skin_yolov8n-seg_400.pt" + }, + { + "name": "skin_yolov8n-seg_800.pt (segm)", + "type": "Ultralytics", + "base": "Ultralytics", + "save_path": "ultralytics/segm", + "description": "These are the available models in the UltralyticsDetectorProvider of Impact Pack.", + "reference": "https://github.com/hben35096/assets/releases/tag/yolo8", + "filename": "skin_yolov8n-seg_800.pt", + "url": "https://github.com/hben35096/assets/releases/download/yolo8/skin_yolov8n-seg_800.pt" + }, + + { + "name": "CiaraRowles/temporaldiff-v1-animatediff.ckpt (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/CiaraRowles/TemporalDiff", + "filename": "temporaldiff-v1-animatediff.ckpt", + "url": "https://huggingface.co/CiaraRowles/TemporalDiff/resolve/main/temporaldiff-v1-animatediff.ckpt" + }, + { + "name": "animatediff/mm_sd_v15_v2.ckpt (ComfyUI-AnimateDiff-Evolved)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "custom_nodes/ComfyUI-AnimateDiff-Evolved/models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node. 
(Note: Requires ComfyUI-Manager V0.24 or above)", + "reference": "https://huggingface.co/guoyww/animatediff", + "filename": "mm_sd_v15_v2.ckpt", + "url": "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt" + }, + { + "name": "AD_Stabilized_Motion/mm-Stabilized_high.pth (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/manshoety/AD_Stabilized_Motion", + "filename": "mm-Stabilized_high.pth", + "url": "https://huggingface.co/manshoety/AD_Stabilized_Motion/resolve/main/mm-Stabilized_high.pth" + }, + { + "name": "AD_Stabilized_Motion/mm-Stabilized_mid.pth (ComfyUI-AnimateDiff-Evolved) (Updated path)", + "type": "animatediff", + "base": "SD1.x", + "save_path": "animatediff_models", + "description": "Pressing 'install' directly downloads the model from the Kosinkadink/ComfyUI-AnimateDiff-Evolved extension node.", + "reference": "https://huggingface.co/manshoety/AD_Stabilized_Motion", + "filename": "mm-Stabilized_mid.pth", + "url": "https://huggingface.co/manshoety/AD_Stabilized_Motion/resolve/main/mm-Stabilized_mid.pth" + } + ] +} diff --git a/custom_nodes/ComfyUI-Manager/node_db/tutorial/custom-node-list.json b/custom_nodes/ComfyUI-Manager/node_db/tutorial/custom-node-list.json new file mode 100644 index 0000000000000000000000000000000000000000..d191d7159a8aa4476bb5a4e46372d1e978ba453c --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/tutorial/custom-node-list.json @@ -0,0 +1,124 @@ +{ + "custom_nodes": [ + { + "author": "Suzie1", + "title": "Guide To Making Custom Nodes in ComfyUI", + "reference": "https://github.com/Suzie1/ComfyUI_Guide_To_Making_Custom_Nodes", + "files": [ + "https://github.com/Suzie1/ComfyUI_Guide_To_Making_Custom_Nodes" + ], + "install_type": "git-clone", + "description": "There is a small node pack attached to this guide. This includes the init file and 3 nodes associated with the tutorials." + }, + { + "author": "dynamixar", + "title": "Atluris", + "reference": "https://github.com/dynamixar/Atluris", + "files": [ + "https://github.com/dynamixar/Atluris" + ], + "install_type": "git-clone", + "description": "Nodes:Random Line" + }, + { + "author": "et118", + "title": "ComfyUI-ElGogh-Nodes", + "reference": "https://github.com/et118/ComfyUI-ElGogh-Nodes", + "files": [ + "https://github.com/et118/ComfyUI-ElGogh-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes:ElGogh Positive Prompt, ElGogh NEGATIVE Prompt, ElGogh Empty Latent Image, ElGogh Checkpoint Loader Simple" + }, + { + "author": "LarryJane491", + "title": "Custom-Node-Base", + "reference": "https://github.com/LarryJane491/Custom-Node-Base", + "files": [ + "https://github.com/LarryJane491/Custom-Node-Base" + ], + "install_type": "git-clone", + "description": "This project is an `empty` custom node that is already in its own folder. It serves as a base to build any custom node. Whenever you want to create a custom node, you can download that, put it in custom_nodes, then you just have to change the names and fill it with code!" 
+ }, + { + "author": "foxtrot-roger", + "title": "comfyui-custom-nodes", + "reference": "https://github.com/foxtrot-roger/comfyui-custom-nodes", + "files": [ + "https://github.com/foxtrot-roger/comfyui-custom-nodes" + ], + "install_type": "git-clone", + "description": "Tutorial nodes" + }, + { + "author": "GraftingRayman", + "title": "ComfyUI-Trajectory", + "reference": "https://github.com/GraftingRayman/ComfyUI-Trajectory", + "files": [ + "https://github.com/GraftingRayman/ComfyUI-Trajectory" + ], + "install_type": "git-clone", + "description": "Nodes:GR Trajectory" + }, + { + "author": "wailovet", + "title": "ComfyUI-WW", + "reference": "https://github.com/wailovet/ComfyUI-WW", + "files": [ + "https://github.com/wailovet/ComfyUI-WW" + ], + "install_type": "git-clone", + "description": "Nodes:WW_ImageResize" + }, + { + "author": "bmz55", + "title": "bmz nodes", + "reference": "https://github.com/bmz55/comfyui-bmz-nodes", + "files": [ + "https://github.com/bmz55/comfyui-bmz-nodes" + ], + "install_type": "git-clone", + "description": "Nodes:Load Images From Dir With Name (Inspire - BMZ), Count Images In Dir (BMZ), Get Level Text (BMZ), Get Level Float (BMZ)" + }, + { + "author": "azure-dragon-ai", + "title": "ComfyUI-HPSv2-Nodes", + "reference": "https://github.com/azure-dragon-ai/ComfyUI-HPSv2-Nodes", + "files": [ + "https://github.com/azure-dragon-ai/ComfyUI-HPSv2-Nodes" + ], + "install_type": "git-clone", + "description": "Nodes:Loader, Image Processor, Text Processor, ImageScore" + }, + { + "author": "kappa54m", + "title": "ComfyUI-HPSv2-Nodes", + "reference": "https://github.com/kappa54m/ComfyUI_Usability", + "files": [ + "https://github.com/kappa54m/ComfyUI_Usability" + ], + "install_type": "git-clone", + "description": "Nodes:Load Image Dedup" + }, + { + "author": "IvanRybakov", + "title": "comfyui-node-int-to-string-convertor", + "reference": "https://github.com/IvanRybakov/comfyui-node-int-to-string-convertor", + "files": [ + "https://github.com/IvanRybakov/comfyui-node-int-to-string-convertor" + ], + "install_type": "git-clone", + "description": "Nodes:Int To String Convertor" + }, + { + "author": "yowipr", + "title": "ComfyUI-Manual", + "reference": "https://github.com/yowipr/ComfyUI-Manual", + "files": [ + "https://github.com/yowipr/ComfyUI-Manual" + ], + "install_type": "git-clone", + "description": "Nodes:M_Layer, M_Output" + } + ] +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/tutorial/extension-node-map.json b/custom_nodes/ComfyUI-Manager/node_db/tutorial/extension-node-map.json new file mode 100644 index 0000000000000000000000000000000000000000..9e26dfeeb6e641a33dae4961196235bdb965b21b --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/tutorial/extension-node-map.json @@ -0,0 +1 @@ +{} \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/node_db/tutorial/model-list.json b/custom_nodes/ComfyUI-Manager/node_db/tutorial/model-list.json new file mode 100644 index 0000000000000000000000000000000000000000..8e3e1dc4858a08aa46190aa53ba320d565206cf4 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/tutorial/model-list.json @@ -0,0 +1,3 @@ +{ + "models": [] +} diff --git a/custom_nodes/ComfyUI-Manager/node_db/tutorial/scan.sh b/custom_nodes/ComfyUI-Manager/node_db/tutorial/scan.sh new file mode 100755 index 0000000000000000000000000000000000000000..5d8d8c48b6e3f48dc1491738c1226f574909c05d --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/node_db/tutorial/scan.sh @@ -0,0 +1,4 @@ +#!/bin/bash +source 
../../../../venv/bin/activate +rm .tmp/*.py > /dev/null +python ../../scanner.py diff --git a/custom_nodes/ComfyUI-Manager/notebooks/comfyui_colab_with_manager.ipynb b/custom_nodes/ComfyUI-Manager/notebooks/comfyui_colab_with_manager.ipynb new file mode 100644 index 0000000000000000000000000000000000000000..36bab4fe83f42901cb136c273806537e6b6baa0d --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/notebooks/comfyui_colab_with_manager.ipynb @@ -0,0 +1,353 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "aaaaaaaaaa" + }, + "source": [ + "Git clone the repo and install the requirements. (ignore the pip errors about protobuf)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "bbbbbbbbbb" + }, + "outputs": [], + "source": [ + "# #@title Environment Setup\n", + "\n", + "from pathlib import Path\n", + "\n", + "OPTIONS = {}\n", + "\n", + "USE_GOOGLE_DRIVE = True #@param {type:\"boolean\"}\n", + "UPDATE_COMFY_UI = True #@param {type:\"boolean\"}\n", + "USE_COMFYUI_MANAGER = True #@param {type:\"boolean\"}\n", + "INSTALL_CUSTOM_NODES_DEPENDENCIES = True #@param {type:\"boolean\"}\n", + "OPTIONS['USE_GOOGLE_DRIVE'] = USE_GOOGLE_DRIVE\n", + "OPTIONS['UPDATE_COMFY_UI'] = UPDATE_COMFY_UI\n", + "OPTIONS['USE_COMFYUI_MANAGER'] = USE_COMFYUI_MANAGER\n", + "OPTIONS['INSTALL_CUSTOM_NODES_DEPENDENCIES'] = INSTALL_CUSTOM_NODES_DEPENDENCIES\n", + "\n", + "current_dir = !pwd\n", + "WORKSPACE = f\"{current_dir[0]}/ComfyUI\"\n", + "\n", + "if OPTIONS['USE_GOOGLE_DRIVE']:\n", + " !echo \"Mounting Google Drive...\"\n", + " %cd /\n", + "\n", + " from google.colab import drive\n", + " drive.mount('/content/drive')\n", + "\n", + " WORKSPACE = \"/content/drive/MyDrive/ComfyUI\"\n", + " %cd /content/drive/MyDrive\n", + "\n", + "![ ! -d $WORKSPACE ] && echo -= Initial setup ComfyUI =- && git clone https://github.com/comfyanonymous/ComfyUI\n", + "%cd $WORKSPACE\n", + "\n", + "if OPTIONS['UPDATE_COMFY_UI']:\n", + " !echo -= Updating ComfyUI =-\n", + " !git pull\n", + "\n", + "!echo -= Install dependencies =-\n", + "#Remove cu121 as it causes issues in Colab.\n", + "#!pip install xformers!=0.0.18 -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu121 --extra-index-url https://download.pytorch.org/whl/cu118 --extra-index-url https://download.pytorch.org/whl/cu117\n", + "!pip3 install accelerate\n", + "!pip3 install einops transformers>=4.25.1 safetensors>=0.3.0 aiohttp pyyaml Pillow scipy tqdm psutil\n", + "!pip3 install xformers!=0.0.18 torch==2.1.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121\n", + "!pip3 install torchsde\n", + "\n", + "if OPTIONS['USE_COMFYUI_MANAGER']:\n", + " %cd custom_nodes\n", + " ![ ! 
-d ComfyUI-Manager ] && echo -= Initial setup ComfyUI-Manager =- && git clone https://github.com/ltdrdata/ComfyUI-Manager\n", + " %cd ComfyUI-Manager\n", + " !git pull\n", + "\n", + "%cd $WORKSPACE\n", + "\n", + "if OPTIONS['INSTALL_CUSTOM_NODES_DEPENDENCIES']:\n", + " !pwd\n", + " !echo -= Install custom nodes dependencies =-\n", + " ![ -f \"custom_nodes/ComfyUI-Manager/scripts/colab-dependencies.py\" ] && python \"custom_nodes/ComfyUI-Manager/scripts/colab-dependencies.py\"\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "cccccccccc" + }, + "source": [ + "Download some models/checkpoints/vae or custom comfyui nodes (uncomment the commands for the ones you want)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "dddddddddd" + }, + "outputs": [], + "source": [ + "# Checkpoints\n", + "\n", + "### SDXL\n", + "### I recommend these workflow examples: https://comfyanonymous.github.io/ComfyUI_examples/sdxl/\n", + "\n", + "#!wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors -P ./models/checkpoints/\n", + "#!wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors -P ./models/checkpoints/\n", + "\n", + "# SDXL ReVision\n", + "#!wget -c https://huggingface.co/comfyanonymous/clip_vision_g/resolve/main/clip_vision_g.safetensors -P ./models/clip_vision/\n", + "\n", + "# SD1.5\n", + "!wget -c https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -P ./models/checkpoints/\n", + "\n", + "# SD2\n", + "#!wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.safetensors -P ./models/checkpoints/\n", + "#!wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors -P ./models/checkpoints/\n", + "\n", + "# Some SD1.5 anime style\n", + "#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_hard.safetensors -P ./models/checkpoints/\n", + "#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1_orangemixs.safetensors -P ./models/checkpoints/\n", + "#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A3_orangemixs.safetensors -P ./models/checkpoints/\n", + "#!wget -c https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/anything-v3-fp16-pruned.safetensors -P ./models/checkpoints/\n", + "\n", + "# Waifu Diffusion 1.5 (anime style SD2.x 768-v)\n", + "#!wget -c https://huggingface.co/waifu-diffusion/wd-1-5-beta3/resolve/main/wd-illusion-fp16.safetensors -P ./models/checkpoints/\n", + "\n", + "\n", + "# unCLIP models\n", + "#!wget -c https://huggingface.co/comfyanonymous/illuminatiDiffusionV1_v11_unCLIP/resolve/main/illuminatiDiffusionV1_v11-unclip-h-fp16.safetensors -P ./models/checkpoints/\n", + "#!wget -c https://huggingface.co/comfyanonymous/wd-1.5-beta2_unCLIP/resolve/main/wd-1-5-beta2-aesthetic-unclip-h-fp16.safetensors -P ./models/checkpoints/\n", + "\n", + "\n", + "# VAE\n", + "!wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors -P ./models/vae/\n", + "#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt -P ./models/vae/\n", + "#!wget -c https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt -P 
./models/vae/\n", + "\n", + "\n", + "# Loras\n", + "#!wget -c https://civitai.com/api/download/models/10350 -O ./models/loras/theovercomer8sContrastFix_sd21768.safetensors #theovercomer8sContrastFix SD2.x 768-v\n", + "#!wget -c https://civitai.com/api/download/models/10638 -O ./models/loras/theovercomer8sContrastFix_sd15.safetensors #theovercomer8sContrastFix SD1.x\n", + "#!wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors -P ./models/loras/ #SDXL offset noise lora\n", + "\n", + "\n", + "# T2I-Adapter\n", + "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_depth_sd14v1.pth -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_seg_sd14v1.pth -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_sketch_sd14v1.pth -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_keypose_sd14v1.pth -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_openpose_sd14v1.pth -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_color_sd14v1.pth -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_canny_sd14v1.pth -P ./models/controlnet/\n", + "\n", + "# T2I Styles Model\n", + "#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_style_sd14v1.pth -P ./models/style_models/\n", + "\n", + "# CLIPVision model (needed for styles model)\n", + "#!wget -c https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin -O ./models/clip_vision/clip_vit14.bin\n", + "\n", + "\n", + "# ControlNet\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_canny_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_lineart_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_openpose_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c 
https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_scribble_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_seg_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_softedge_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11u_sd15_tile_fp16.safetensors -P ./models/controlnet/\n", + "\n", + "# ControlNet SDXL\n", + "#!wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-canny-rank256.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-depth-rank256.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-recolor-rank256.safetensors -P ./models/controlnet/\n", + "#!wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-sketch-rank256.safetensors -P ./models/controlnet/\n", + "\n", + "# Controlnet Preprocessor nodes by Fannovel16\n", + "#!cd custom_nodes && git clone https://github.com/Fannovel16/comfy_controlnet_preprocessors; cd comfy_controlnet_preprocessors && python install.py\n", + "\n", + "\n", + "# GLIGEN\n", + "#!wget -c https://huggingface.co/comfyanonymous/GLIGEN_pruned_safetensors/resolve/main/gligen_sd14_textbox_pruned_fp16.safetensors -P ./models/gligen/\n", + "\n", + "\n", + "# ESRGAN upscale model\n", + "#!wget -c https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P ./models/upscale_models/\n", + "#!wget -c https://huggingface.co/sberbank-ai/Real-ESRGAN/resolve/main/RealESRGAN_x2.pth -P ./models/upscale_models/\n", + "#!wget -c https://huggingface.co/sberbank-ai/Real-ESRGAN/resolve/main/RealESRGAN_x4.pth -P ./models/upscale_models/\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "kkkkkkkkkkkkkkk" + }, + "source": [ + "### Run ComfyUI with cloudflared (Recommended Way)\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "jjjjjjjjjjjjjj" + }, + "outputs": [], + "source": [ + "!wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb\n", + "!dpkg -i cloudflared-linux-amd64.deb\n", + "\n", + "import subprocess\n", + "import threading\n", + "import time\n", + "import socket\n", + "import urllib.request\n", + "\n", + "def iframe_thread(port):\n", + " while True:\n", + " time.sleep(0.5)\n", + " sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n", + " result = sock.connect_ex(('127.0.0.1', port))\n", + " if result == 0:\n", + " break\n", + " sock.close()\n", + " print(\"\\nComfyUI finished loading, trying to launch cloudflared (if it gets stuck here cloudflared is having issues)\\n\")\n", + "\n", + " p = subprocess.Popen([\"cloudflared\", \"tunnel\", \"--url\", \"http://127.0.0.1:{}\".format(port)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n", + " for line in p.stderr:\n", + " l = 
line.decode()\n", + " if \"trycloudflare.com \" in l:\n", + " print(\"This is the URL to access ComfyUI:\", l[l.find(\"http\"):], end='')\n", + " #print(l, end='')\n", + "\n", + "\n", + "threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n", + "\n", + "!python main.py --dont-print-server" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "kkkkkkkkkkkkkk" + }, + "source": [ + "### Run ComfyUI with localtunnel\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "jjjjjjjjjjjjj" + }, + "outputs": [], + "source": [ + "!npm install -g localtunnel\n", + "\n", + "import subprocess\n", + "import threading\n", + "import time\n", + "import socket\n", + "import urllib.request\n", + "\n", + "def iframe_thread(port):\n", + " while True:\n", + " time.sleep(0.5)\n", + " sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n", + " result = sock.connect_ex(('127.0.0.1', port))\n", + " if result == 0:\n", + " break\n", + " sock.close()\n", + " print(\"\\nComfyUI finished loading, trying to launch localtunnel (if it gets stuck here localtunnel is having issues)\\n\")\n", + "\n", + " print(\"The password/enpoint ip for localtunnel is:\", urllib.request.urlopen('https://ipv4.icanhazip.com').read().decode('utf8').strip(\"\\n\"))\n", + " p = subprocess.Popen([\"lt\", \"--port\", \"{}\".format(port)], stdout=subprocess.PIPE)\n", + " for line in p.stdout:\n", + " print(line.decode(), end='')\n", + "\n", + "\n", + "threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n", + "\n", + "!python main.py --dont-print-server" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gggggggggg" + }, + "source": [ + "### Run ComfyUI with colab iframe (use only in case the previous way with localtunnel doesn't work)\n", + "\n", + "You should see the ui appear in an iframe. If you get a 403 error, it's your firefox settings or an extension that's messing things up.\n", + "\n", + "If you want to open it in another window use the link.\n", + "\n", + "Note that some UI features like live image previews won't work because the colab iframe blocks websockets." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "hhhhhhhhhh" + }, + "outputs": [], + "source": [ + "import threading\n", + "import time\n", + "import socket\n", + "def iframe_thread(port):\n", + " while True:\n", + " time.sleep(0.5)\n", + " sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n", + " result = sock.connect_ex(('127.0.0.1', port))\n", + " if result == 0:\n", + " break\n", + " sock.close()\n", + " from google.colab import output\n", + " output.serve_kernel_port_as_iframe(port, height=1024)\n", + " print(\"to open it in a window you can open this link here:\")\n", + " output.serve_kernel_port_as_window(port)\n", + "\n", + "threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n", + "\n", + "!python main.py --dont-print-server" + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "provenance": [] + }, + "gpuClass": "standard", + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/custom_nodes/ComfyUI-Manager/prestartup_script.py b/custom_nodes/ComfyUI-Manager/prestartup_script.py new file mode 100644 index 0000000000000000000000000000000000000000..31c445e86560ad87f10e7a95bb129890a28d1589 --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/prestartup_script.py @@ -0,0 +1,525 @@ +import datetime +import os +import subprocess +import sys +import atexit +import threading +import re +import locale +import platform + + +glob_path = os.path.join(os.path.dirname(__file__), "glob") +sys.path.append(glob_path) + +import cm_global + + +message_collapses = [] +import_failed_extensions = set() +cm_global.variables['cm.on_revision_detected_handler'] = [] +enable_file_logging = True + + +def register_message_collapse(f): + global message_collapses + message_collapses.append(f) + + +def is_import_failed_extension(name): + global import_failed_extensions + return name in import_failed_extensions + + +def check_file_logging(): + global enable_file_logging + try: + import configparser + config_path = os.path.join(os.path.dirname(__file__), "config.ini") + config = configparser.ConfigParser() + config.read(config_path) + default_conf = config['default'] + + if 'file_logging' in default_conf and default_conf['file_logging'].lower() == 'false': + enable_file_logging = False + except Exception: + pass + + +check_file_logging() + + +sys.__comfyui_manager_register_message_collapse = register_message_collapse +sys.__comfyui_manager_is_import_failed_extension = is_import_failed_extension +cm_global.register_api('cm.register_message_collapse', register_message_collapse) +cm_global.register_api('cm.is_import_failed_extension', is_import_failed_extension) + + +comfyui_manager_path = os.path.dirname(__file__) +custom_nodes_path = os.path.abspath(os.path.join(comfyui_manager_path, "..")) +startup_script_path = os.path.join(comfyui_manager_path, "startup-scripts") +restore_snapshot_path = os.path.join(startup_script_path, "restore-snapshot.json") +git_script_path = os.path.join(comfyui_manager_path, "git_helper.py") + +std_log_lock = threading.Lock() + + +class TerminalHook: + def __init__(self): + self.hooks = {} + + def add_hook(self, k, v): + self.hooks[k] = v + + def remove_hook(self, k): + if k in self.hooks: + del self.hooks[k] + + def write_stderr(self, msg): + for v in self.hooks.values(): + try: + v.write_stderr(msg) + except Exception: + pass + + def write_stdout(self, msg): + for v in self.hooks.values(): 
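+ # Mirror stdout messages to every registered hook; a hook that raises is skipped so one broken hook cannot take down console logging.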
+ try: + v.write_stdout(msg) + except Exception: + pass + + +terminal_hook = TerminalHook() +sys.__comfyui_manager_terminal_hook = terminal_hook + + +def handle_stream(stream, prefix): + stream.reconfigure(encoding=locale.getpreferredencoding(), errors='replace') + for msg in stream: + if prefix == '[!]' and ('it/s]' in msg or 's/it]' in msg) and ('%|' in msg or 'it [' in msg): + if msg.startswith('100%'): + print('\r' + msg, end="", file=sys.stderr), + else: + print('\r' + msg[:-1], end="", file=sys.stderr), + else: + if prefix == '[!]': + print(prefix, msg, end="", file=sys.stderr) + else: + print(prefix, msg, end="") + + +def process_wrap(cmd_str, cwd_path, handler=None): + process = subprocess.Popen(cmd_str, cwd=cwd_path, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1) + + if handler is None: + handler = handle_stream + + stdout_thread = threading.Thread(target=handler, args=(process.stdout, "")) + stderr_thread = threading.Thread(target=handler, args=(process.stderr, "[!]")) + + stdout_thread.start() + stderr_thread.start() + + stdout_thread.join() + stderr_thread.join() + + return process.wait() + + +try: + if '--port' in sys.argv: + port_index = sys.argv.index('--port') + if port_index + 1 < len(sys.argv): + port = int(sys.argv[port_index + 1]) + postfix = f"_{port}" + else: + postfix = "" + + # Logger setup + if enable_file_logging: + if os.path.exists(f"comfyui{postfix}.log"): + if os.path.exists(f"comfyui{postfix}.prev.log"): + if os.path.exists(f"comfyui{postfix}.prev2.log"): + os.remove(f"comfyui{postfix}.prev2.log") + os.rename(f"comfyui{postfix}.prev.log", f"comfyui{postfix}.prev2.log") + os.rename(f"comfyui{postfix}.log", f"comfyui{postfix}.prev.log") + + log_file = open(f"comfyui{postfix}.log", "w", encoding="utf-8", errors="ignore") + + log_lock = threading.Lock() + + original_stdout = sys.stdout + original_stderr = sys.stderr + + if original_stdout.encoding.lower() == 'utf-8': + write_stdout = original_stdout.write + write_stderr = original_stderr.write + else: + def wrapper_stdout(msg): + original_stdout.write(msg.encode('utf-8').decode(original_stdout.encoding, errors="ignore")) + + def wrapper_stderr(msg): + original_stderr.write(msg.encode('utf-8').decode(original_stderr.encoding, errors="ignore")) + + write_stdout = wrapper_stdout + write_stderr = wrapper_stderr + + pat_tqdm = r'\d+%.*\[(.*?)\]' + pat_import_fail = r'seconds \(IMPORT FAILED\):' + pat_custom_node = r'[/\\]custom_nodes[/\\](.*)$' + + is_start_mode = True + is_import_fail_mode = False + + class ComfyUIManagerLogger: + def __init__(self, is_stdout): + self.is_stdout = is_stdout + self.encoding = "utf-8" + self.last_char = '' + + def fileno(self): + try: + if self.is_stdout: + return original_stdout.fileno() + else: + return original_stderr.fileno() + except AttributeError: + # Handle error + raise ValueError("The object does not have a fileno method") + + def write(self, message): + global is_start_mode + global is_import_fail_mode + + if any(f(message) for f in message_collapses): + return + + if is_start_mode: + if is_import_fail_mode: + match = re.search(pat_custom_node, message) + if match: + import_failed_extensions.add(match.group(1)) + is_import_fail_mode = False + else: + match = re.search(pat_import_fail, message) + if match: + is_import_fail_mode = True + else: + is_import_fail_mode = False + + if 'Starting server' in message: + is_start_mode = False + + if not self.is_stdout: + match = re.search(pat_tqdm, message) + if match: + message = re.sub(r'([#|])\d', r'\1▌', 
message) + message = re.sub('#', '█', message) + if '100%' in message: + self.sync_write(message) + else: + write_stderr(message) + original_stderr.flush() + else: + self.sync_write(message) + else: + self.sync_write(message) + + def sync_write(self, message): + with log_lock: + timestamp = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')[:-3] + if self.last_char != '\n': + log_file.write(message) + else: + log_file.write(f"[{timestamp}] {message}") + log_file.flush() + self.last_char = message if message == '' else message[-1] + + with std_log_lock: + if self.is_stdout: + write_stdout(message) + original_stdout.flush() + terminal_hook.write_stderr(message) + else: + write_stderr(message) + original_stderr.flush() + terminal_hook.write_stdout(message) + + def flush(self): + log_file.flush() + + with std_log_lock: + if self.is_stdout: + original_stdout.flush() + else: + original_stderr.flush() + + def close(self): + self.flush() + + def reconfigure(self, *args, **kwargs): + pass + + # You can close through sys.stderr.close_log() + def close_log(self): + sys.stderr = original_stderr + sys.stdout = original_stdout + log_file.close() + + def close_log(): + sys.stderr = original_stderr + sys.stdout = original_stdout + log_file.close() + + + if enable_file_logging: + sys.stdout = ComfyUIManagerLogger(True) + sys.stderr = ComfyUIManagerLogger(False) + + atexit.register(close_log) + else: + sys.stdout.close_log = lambda: None + +except Exception as e: + print(f"[ComfyUI-Manager] Logging failed: {e}") + + +print("** ComfyUI startup time:", datetime.datetime.now()) +print("** Platform:", platform.system()) +print("** Python version:", sys.version) +print("** Python executable:", sys.executable) + +if enable_file_logging: + print("** Log path:", os.path.abspath('comfyui.log')) +else: + print("** Log path: file logging is disabled") + + +def check_bypass_ssl(): + try: + import configparser + import ssl + config_path = os.path.join(os.path.dirname(__file__), "config.ini") + config = configparser.ConfigParser() + config.read(config_path) + default_conf = config['default'] + + if 'bypass_ssl' in default_conf and default_conf['bypass_ssl'].lower() == 'true': + print(f"[ComfyUI-Manager] WARN: Unsafe - SSL verification bypass option is Enabled. (see ComfyUI-Manager/config.ini)") + ssl._create_default_https_context = ssl._create_unverified_context # SSL certificate error fix. 
+ except Exception: + pass + + +check_bypass_ssl() + + +# Perform install +processed_install = set() +script_list_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "startup-scripts", "install-scripts.txt") +pip_list = None + + +def get_installed_packages(): + global pip_list + + if pip_list is None: + try: + result = subprocess.check_output([sys.executable, '-m', 'pip', 'list'], universal_newlines=True) + pip_list = set([line.split()[0].lower() for line in result.split('\n') if line.strip()]) + except subprocess.CalledProcessError as e: + print(f"[ComfyUI-Manager] Failed to retrieve the information of installed pip packages.") + return set() + + return pip_list + + +def is_installed(name): + name = name.strip() + + if name.startswith('#'): + return True + + pattern = r'([^<>!=]+)([<>!=]=?)' + match = re.search(pattern, name) + + if match: + name = match.group(1) + + return name.lower() in get_installed_packages() + + +if os.path.exists(restore_snapshot_path): + try: + import json + + cloned_repos = [] + + def msg_capture(stream, prefix): + stream.reconfigure(encoding=locale.getpreferredencoding(), errors='replace') + for msg in stream: + if msg.startswith("CLONE: "): + cloned_repos.append(msg[7:]) + if prefix == '[!]': + print(prefix, msg, end="", file=sys.stderr) + else: + print(prefix, msg, end="") + + elif prefix == '[!]' and ('it/s]' in msg or 's/it]' in msg) and ('%|' in msg or 'it [' in msg): + if msg.startswith('100%'): + print('\r' + msg, end="", file=sys.stderr), + else: + print('\r'+msg[:-1], end="", file=sys.stderr), + else: + if prefix == '[!]': + print(prefix, msg, end="", file=sys.stderr) + else: + print(prefix, msg, end="") + + print(f"[ComfyUI-Manager] Restore snapshot.") + cmd_str = [sys.executable, git_script_path, '--apply-snapshot', restore_snapshot_path] + exit_code = process_wrap(cmd_str, custom_nodes_path, handler=msg_capture) + + with open(restore_snapshot_path, 'r', encoding="UTF-8", errors="ignore") as json_file: + info = json.load(json_file) + for url in cloned_repos: + try: + repository_name = url.split("/")[-1].strip() + repo_path = os.path.join(custom_nodes_path, repository_name) + repo_path = os.path.abspath(repo_path) + + requirements_path = os.path.join(repo_path, 'requirements.txt') + install_script_path = os.path.join(repo_path, 'install.py') + + this_exit_code = 0 + + if os.path.exists(requirements_path): + with open(requirements_path, 'r', encoding="UTF-8", errors="ignore") as file: + for line in file: + package_name = line.strip() + if package_name and not is_installed(package_name): + install_cmd = [sys.executable, "-m", "pip", "install", package_name] + this_exit_code += process_wrap(install_cmd, repo_path) + + if os.path.exists(install_script_path) and f'{repo_path}/install.py' not in processed_install: + processed_install.add(f'{repo_path}/install.py') + install_cmd = [sys.executable, install_script_path] + print(f">>> {install_cmd} / {repo_path}") + this_exit_code += process_wrap(install_cmd, repo_path) + + if this_exit_code != 0: + print(f"[ComfyUI-Manager] Restoring '{repository_name}' is failed.") + + except Exception as e: + print(e) + print(f"[ComfyUI-Manager] Restoring '{repository_name}' is failed.") + + if exit_code != 0: + print(f"[ComfyUI-Manager] Restore snapshot failed.") + else: + print(f"[ComfyUI-Manager] Restore snapshot done.") + + except Exception as e: + print(e) + print(f"[ComfyUI-Manager] Restore snapshot failed.") + + os.remove(restore_snapshot_path) + + +def execute_lazy_install_script(repo_path, executable): + 
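+ # Install a repo's missing pip requirements and run its install.py (at most once per repo), using the given Python executable.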
global processed_install + + install_script_path = os.path.join(repo_path, "install.py") + requirements_path = os.path.join(repo_path, "requirements.txt") + + if os.path.exists(requirements_path): + print(f"Install: pip packages for '{repo_path}'") + with open(requirements_path, "r") as requirements_file: + for line in requirements_file: + package_name = line.strip() + if package_name and not is_installed(package_name): + install_cmd = [executable, "-m", "pip", "install", package_name] + process_wrap(install_cmd, repo_path) + + if os.path.exists(install_script_path) and f'{repo_path}/install.py' not in processed_install: + processed_install.add(f'{repo_path}/install.py') + print(f"Install: install script for '{repo_path}'") + install_cmd = [executable, "install.py"] + process_wrap(install_cmd, repo_path) + + +# Check if script_list_path exists +if os.path.exists(script_list_path): + print("\n#######################################################################") + print("[ComfyUI-Manager] Starting dependency installation/(de)activation for the extension\n") + + executed = set() + # Read each line from the file and convert it to a list using eval + with open(script_list_path, 'r', encoding="UTF-8", errors="ignore") as file: + for line in file: + if line in executed: + continue + + executed.add(line) + + try: + script = eval(line) + + if script[1].startswith('#') and script[1] != '#FORCE': + if script[1] == "#LAZY-INSTALL-SCRIPT": + execute_lazy_install_script(script[0], script[2]) + + elif os.path.exists(script[0]): + if script[1] == "#FORCE": + del script[1] + else: + if 'pip' in script[1:] and 'install' in script[1:] and is_installed(script[-1]): + continue + + print(f"\n## ComfyUI-Manager: EXECUTE => {script[1:]}") + print(f"\n## Execute install/(de)activation script for '{script[0]}'") + + exit_code = process_wrap(script[1:], script[0]) + + if exit_code != 0: + print(f"install/(de)activation script failed: {script[0]}") + else: + print(f"\n## ComfyUI-Manager: CANCELED => {script[1:]}") + + except Exception as e: + print(f"[ERROR] Failed to execute install/(de)activation script: {line} / {e}") + + # Remove the script_list_path file + if os.path.exists(script_list_path): + os.remove(script_list_path) + + print("\n[ComfyUI-Manager] Startup script completed.") + print("#######################################################################\n") + +del processed_install +del pip_list + + +def check_windows_event_loop_policy(): + try: + import configparser + config_path = os.path.join(os.path.dirname(__file__), "config.ini") + config = configparser.ConfigParser() + config.read(config_path) + default_conf = config['default'] + + if 'windows_selector_event_loop_policy' in default_conf and default_conf['windows_selector_event_loop_policy'].lower() == 'true': + try: + import asyncio + import asyncio.windows_events + asyncio.set_event_loop_policy(asyncio.windows_events.WindowsSelectorEventLoopPolicy()) + print(f"[ComfyUI-Manager] Windows event loop policy mode enabled") + except Exception as e: + print(f"[ComfyUI-Manager] WARN: Windows initialization fail: {e}") + except Exception: + pass + + +if platform.system() == 'Windows': + check_windows_event_loop_policy() diff --git a/custom_nodes/ComfyUI-Manager/requirements.txt b/custom_nodes/ComfyUI-Manager/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..2435feff83d12d96797c42d02e55c3224cba3b9d --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/requirements.txt @@ -0,0 +1,4 @@ +GitPython +matrix-client==0.4.0 
+transformers +huggingface-hub>0.20 \ No newline at end of file diff --git a/custom_nodes/ComfyUI-Manager/scan.sh b/custom_nodes/ComfyUI-Manager/scan.sh new file mode 100755 index 0000000000000000000000000000000000000000..1b3cc3771790ebedf0c538ff3464125d52e8668c --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/scan.sh @@ -0,0 +1,7 @@ +#!/bin/bash +rm ~/.tmp/default/*.py > /dev/null 2>&1 +python scanner.py ~/.tmp/default +cp extension-node-map.json node_db/new/. + +echo Integrity check +./check.sh diff --git a/custom_nodes/ComfyUI-Manager/scanner.py b/custom_nodes/ComfyUI-Manager/scanner.py new file mode 100644 index 0000000000000000000000000000000000000000..8b8d4de2aa72325c3f5c9d1203cb10bc79bf02fc --- /dev/null +++ b/custom_nodes/ComfyUI-Manager/scanner.py @@ -0,0 +1,336 @@ +import ast +import re +import os +import sys +import json +import concurrent.futures # import the submodule explicitly; 'import concurrent' alone does not expose concurrent.futures +from git import Repo +from torchvision.datasets.utils import download_url + +builtin_nodes = set() + + +# prepare temp dir +if len(sys.argv) > 1: + temp_dir = sys.argv[1] +else: + temp_dir = os.path.join(os.getcwd(), ".tmp") + +if not os.path.exists(temp_dir): + os.makedirs(temp_dir) + +print(f"TEMP DIR: {temp_dir}") + + +def extract_nodes(code_text): + try: + parsed_code = ast.parse(code_text) + + assignments = (node for node in parsed_code.body if isinstance(node, ast.Assign)) + + for assignment in assignments: + if isinstance(assignment.targets[0], ast.Name) and assignment.targets[0].id == 'NODE_CLASS_MAPPINGS': + node_class_mappings = assignment.value + break + else: + node_class_mappings = None + + if node_class_mappings: + s = set([key.s.strip() for key in node_class_mappings.keys if key is not None]) + return s + else: + return set() + except Exception: + return set() + + +# scan +def scan_in_file(filename, is_builtin=False): + global builtin_nodes + + try: + with open(filename, encoding='utf-8') as file: + code = file.read() + except UnicodeDecodeError: + with open(filename, encoding='cp949') as file: + code = file.read() + + pattern = r"_CLASS_MAPPINGS\s*=\s*{([^}]*)}" + regex = re.compile(pattern, re.MULTILINE | re.DOTALL) + + nodes = set() + class_dict = {} + + nodes |= extract_nodes(code) + + pattern2 = r'^[^=]*_CLASS_MAPPINGS\["(.*?)"\]' + keys = re.findall(pattern2, code) + for key in keys: + nodes.add(key.strip()) + + pattern3 = r'^[^=]*_CLASS_MAPPINGS\[\'(.*?)\'\]' + keys = re.findall(pattern3, code) + for key in keys: + nodes.add(key.strip()) + + matches = regex.findall(code) + for match in matches: + dict_text = match + + key_value_pairs = re.findall(r"\"([^\"]*)\"\s*:\s*([^,\n]*)", dict_text) + for key, value in key_value_pairs: + class_dict[key.strip()] = value.strip() + + key_value_pairs = re.findall(r"'([^']*)'\s*:\s*([^,\n]*)", dict_text) + for key, value in key_value_pairs: + class_dict[key.strip()] = value.strip() + + for key, value in class_dict.items(): + nodes.add(key.strip()) + + update_pattern = r"_CLASS_MAPPINGS.update\s*\({([^}]*)}\)" + update_match = re.search(update_pattern, code) + if update_match: + update_dict_text = update_match.group(1) + update_key_value_pairs = re.findall(r"\"([^\"]*)\"\s*:\s*([^,\n]*)", update_dict_text) + for key, value in update_key_value_pairs: + class_dict[key.strip()] = value.strip() + nodes.add(key.strip()) + + metadata = {} + lines = code.strip().split('\n') + for line in lines: + if line.startswith('@'): + if line.startswith("@author:") or line.startswith("@title:") or line.startswith("@nickname:") or line.startswith("@description:"): + key, value = line[1:].strip().split(':', 1) +
metadata[key.strip()] = value.strip() + + if is_builtin: + builtin_nodes |= set(nodes) # in-place set union ('+=' is not supported for sets) + else: + for x in builtin_nodes: + if x in nodes: + nodes.remove(x) + + return nodes, metadata + + +def get_py_file_paths(dirname): + file_paths = [] + + for root, dirs, files in os.walk(dirname): + if ".git" in root or "__pycache__" in root: + continue + + for file in files: + if file.endswith(".py"): + file_path = os.path.join(root, file) + file_paths.append(file_path) + + return file_paths + + +def get_nodes(target_dir): + py_files = [] + directories = [] + + for item in os.listdir(target_dir): + if ".git" in item or "__pycache__" in item: + continue + + path = os.path.abspath(os.path.join(target_dir, item)) + + if os.path.isfile(path) and item.endswith(".py"): + py_files.append(path) + elif os.path.isdir(path): + directories.append(path) + + return py_files, directories + + +def get_git_urls_from_json(json_file): + with open(json_file, encoding='utf-8') as file: + data = json.load(file) + + custom_nodes = data.get('custom_nodes', []) + git_clone_files = [] + for node in custom_nodes: + if node.get('install_type') == 'git-clone': + files = node.get('files', []) + if files: + git_clone_files.append((files[0], node.get('title'), node.get('nodename_pattern'))) + + git_clone_files.append(("https://github.com/comfyanonymous/ComfyUI", "ComfyUI", None)) + + return git_clone_files + + +def get_py_urls_from_json(json_file): + with open(json_file, encoding='utf-8') as file: + data = json.load(file) + + custom_nodes = data.get('custom_nodes', []) + py_files = [] + for node in custom_nodes: + if node.get('install_type') == 'copy': + files = node.get('files', []) + if files: + py_files.append((files[0], node.get('title'), node.get('nodename_pattern'))) + + return py_files + + +def clone_or_pull_git_repository(git_url): + repo_name = git_url.split("/")[-1].split(".")[0] + repo_dir = os.path.join(temp_dir, repo_name) + + if os.path.exists(repo_dir): + try: + repo = Repo(repo_dir) + origin = repo.remote(name="origin") + origin.pull(rebase=True) + repo.git.submodule('update', '--init', '--recursive') + print(f"Pulling {repo_name}...") + except Exception as e: + print(f"Pulling {repo_name} failed: {e}") + else: + try: + Repo.clone_from(git_url, repo_dir, recursive=True) + print(f"Cloning {repo_name}...") + except Exception as e: + print(f"Cloning {repo_name} failed: {e}") + + +def update_custom_nodes(): + if not os.path.exists(temp_dir): + os.makedirs(temp_dir) + + node_info = {} + + git_url_titles = get_git_urls_from_json('custom-node-list.json') + + def process_git_url_title(url, title, node_pattern): + name = os.path.basename(url) + if name.endswith(".git"): + name = name[:-4] + + node_info[name] = (url, title, node_pattern) + clone_or_pull_git_repository(url) + + with concurrent.futures.ThreadPoolExecutor(10) as executor: + for url, title, node_pattern in git_url_titles: + executor.submit(process_git_url_title, url, title, node_pattern) + + py_url_titles_and_pattern = get_py_urls_from_json('custom-node-list.json') + + def download_and_store_info(url_title_and_pattern): + url, title, node_pattern = url_title_and_pattern + name = os.path.basename(url) + if name.endswith(".py"): + node_info[name] = (url, title, node_pattern) + + try: + download_url(url, temp_dir) + except Exception: + print(f"[ERROR] Cannot download '{url}'") + + with concurrent.futures.ThreadPoolExecutor(10) as executor: + executor.map(download_and_store_info, py_url_titles_and_pattern) + + return node_info + + +def gen_json(node_info): + # scan from .py file
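+ # node_dirs is reordered below so that ComfyUI itself is scanned first and built-in node names are collected before any custom node directory.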
+
+    node_files, node_dirs = get_nodes(temp_dir)
+
+    comfyui_path = os.path.abspath(os.path.join(temp_dir, "ComfyUI"))
+    node_dirs.remove(comfyui_path)
+    node_dirs = [comfyui_path] + node_dirs
+
+    data = {}
+    for dirname in node_dirs:
+        py_files = get_py_file_paths(dirname)
+        metadata = {}
+
+        nodes = set()
+        for py in py_files:
+            nodes_in_file, metadata_in_file = scan_in_file(py, dirname == comfyui_path)
+            nodes.update(nodes_in_file)
+            metadata.update(metadata_in_file)
+
+        dirname = os.path.basename(dirname)
+
+        if len(nodes) > 0 or (dirname in node_info and node_info[dirname][2] is not None):
+            nodes = list(nodes)
+            nodes.sort()
+
+            if dirname in node_info:
+                git_url, title, node_pattern = node_info[dirname]
+                metadata['title_aux'] = title
+                if node_pattern is not None:
+                    metadata['nodename_pattern'] = node_pattern
+                data[git_url] = (nodes, metadata)
+            else:
+                print(f"WARN: {dirname} is removed from custom-node-list.json")
+
+    for file in node_files:
+        nodes, metadata = scan_in_file(file)
+        file = os.path.basename(file)
+
+        if len(nodes) > 0 or (file in node_info and node_info[file][2] is not None):
+            nodes = list(nodes)
+            nodes.sort()
+
+            if file in node_info:
+                url, title, node_pattern = node_info[file]
+                metadata['title_aux'] = title
+                if node_pattern is not None:
+                    metadata['nodename_pattern'] = node_pattern
+                data[url] = (nodes, metadata)
+            else:
+                print(f"Missing info: {file}")
+
+    # scan from node_list.json file
+    extensions = [name for name in os.listdir(temp_dir) if os.path.isdir(os.path.join(temp_dir, name))]
+
+    for extension in extensions:
+        node_list_json_path = os.path.join(temp_dir, extension, 'node_list.json')
+        if os.path.exists(node_list_json_path):
+            git_url, title, node_pattern = node_info[extension]
+
+            with open(node_list_json_path, 'r', encoding='utf-8') as f:
+                node_list_json = json.load(f)
+
+            metadata_in_url = {}
+            if git_url not in data:
+                nodes = set()
+            else:
+                nodes_in_url, metadata_in_url = data[git_url]
+                nodes = set(nodes_in_url)
+
+            for x, desc in node_list_json.items():
+                nodes.add(x.strip())
+
+            metadata_in_url['title_aux'] = title
+            if node_pattern is not None:
+                metadata_in_url['nodename_pattern'] = node_pattern
+            nodes = list(nodes)
+            nodes.sort()
+            data[git_url] = (nodes, metadata_in_url)
+
+    json_path = "extension-node-map.json"
+    with open(json_path, "w", encoding='utf-8') as file:
+        json.dump(data, file, indent=4, sort_keys=True)
+
+
+print("### ComfyUI Manager Node Scanner ###")
+
+print("\n# Updating extensions\n")
+updated_node_info = update_custom_nodes()
+
+print("\n# Generating 'extension-node-map.json'\n")
+gen_json(updated_node_info)
+
diff --git a/custom_nodes/ComfyUI-Manager/scripts/colab-dependencies.py b/custom_nodes/ComfyUI-Manager/scripts/colab-dependencies.py
new file mode 100644
index 0000000000000000000000000000000000000000..d5a70ed6dd92ba90e8084e07fbb9097fe3096ea5
--- /dev/null
+++ b/custom_nodes/ComfyUI-Manager/scripts/colab-dependencies.py
@@ -0,0 +1,39 @@
+import os
+import subprocess
+
+
+def get_enabled_subdirectories_with_files(base_directory):
+    subdirs_with_files = []
+    for subdir in os.listdir(base_directory):
+        try:
+            full_path = os.path.join(base_directory, subdir)
+            if os.path.isdir(full_path) and not subdir.endswith(".disabled") and not subdir.startswith('.') and subdir != '__pycache__':
+                print(f"## Install dependencies for '{subdir}'")
+                requirements_file = os.path.join(full_path, "requirements.txt")
+                install_script = os.path.join(full_path, "install.py")
+
+                if os.path.exists(requirements_file) or os.path.exists(install_script):
+                    subdirs_with_files.append((full_path, requirements_file, install_script))
+        except Exception as e:
+            print(f"EXCEPTION During Dependencies INSTALL on '{subdir}':\n{e}")
+
+    return subdirs_with_files
+
+
+def install_requirements(requirements_file_path):
+    if os.path.exists(requirements_file_path):
+        subprocess.run(["pip", "install", "-r", requirements_file_path])
+
+
+def run_install_script(install_script_path):
+    if os.path.exists(install_script_path):
+        subprocess.run(["python", install_script_path])
+
+
+custom_nodes_directory = "custom_nodes"
+subdirs_with_files = get_enabled_subdirectories_with_files(custom_nodes_directory)
+
+
+for subdir, requirements_file, install_script in subdirs_with_files:
+    install_requirements(requirements_file)
+    run_install_script(install_script)
diff --git a/custom_nodes/ComfyUI-Manager/scripts/install-comfyui-venv-linux.sh b/custom_nodes/ComfyUI-Manager/scripts/install-comfyui-venv-linux.sh
new file mode 100755
index 0000000000000000000000000000000000000000..be473dc66f8eeb36c48d409945eb5ae83a030171
--- /dev/null
+++ b/custom_nodes/ComfyUI-Manager/scripts/install-comfyui-venv-linux.sh
@@ -0,0 +1,21 @@
+git clone https://github.com/comfyanonymous/ComfyUI
+cd ComfyUI/custom_nodes
+git clone https://github.com/ltdrdata/ComfyUI-Manager
+cd ..
+python -m venv venv
+source venv/bin/activate
+python -m pip install -r requirements.txt
+python -m pip install -r custom_nodes/ComfyUI-Manager/requirements.txt
+python -m pip install torchvision
+cd ..
+echo "#!/bin/bash" > run_gpu.sh
+echo "cd ComfyUI" >> run_gpu.sh
+echo "source venv/bin/activate" >> run_gpu.sh
+echo "python main.py --preview-method auto" >> run_gpu.sh
+chmod +x run_gpu.sh
+
+echo "#!/bin/bash" > run_cpu.sh
+echo "cd ComfyUI" >> run_cpu.sh
+echo "source venv/bin/activate" >> run_cpu.sh
+echo "python main.py --preview-method auto --cpu" >> run_cpu.sh
+chmod +x run_cpu.sh
diff --git a/custom_nodes/ComfyUI-Manager/scripts/install-comfyui-venv-win.bat b/custom_nodes/ComfyUI-Manager/scripts/install-comfyui-venv-win.bat
new file mode 100755
index 0000000000000000000000000000000000000000..6bb0e8364b5170530c2a85341ad754764c6788ae
--- /dev/null
+++ b/custom_nodes/ComfyUI-Manager/scripts/install-comfyui-venv-win.bat
@@ -0,0 +1,20 @@
+git clone https://github.com/comfyanonymous/ComfyUI
+cd ComfyUI/custom_nodes
+git clone https://github.com/ltdrdata/ComfyUI-Manager
+cd ..
+python -m venv venv
+call venv/Scripts/activate
+python -m pip install -r requirements.txt
+python -m pip install -r custom_nodes/ComfyUI-Manager/requirements.txt
+python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu118 xformers
+cd ..
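+REM Generate launcher scripts next to the ComfyUI folder.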
+echo cd ComfyUI > run_gpu.bat
+echo call venv\Scripts\activate >> run_gpu.bat
+echo python main.py >> run_gpu.bat
+
+echo cd ComfyUI > run_cpu.bat
+echo call venv\Scripts\activate >> run_cpu.bat
+echo python main.py --cpu >> run_cpu.bat
diff --git a/custom_nodes/ComfyUI-Manager/scripts/install-manager-for-portable-version.bat b/custom_nodes/ComfyUI-Manager/scripts/install-manager-for-portable-version.bat
new file mode 100644
index 0000000000000000000000000000000000000000..7b067dfd770d197ccd68e760087536552223f260
--- /dev/null
+++ b/custom_nodes/ComfyUI-Manager/scripts/install-manager-for-portable-version.bat
@@ -0,0 +1,2 @@
+.\python_embeded\python.exe -s -m pip install gitpython
+.\python_embeded\python.exe -c "import git; git.Repo.clone_from('https://github.com/ltdrdata/ComfyUI-Manager', './ComfyUI/custom_nodes/ComfyUI-Manager')"
diff --git a/custom_nodes/ComfyUI-Manager/scripts/update-fix.py b/custom_nodes/ComfyUI-Manager/scripts/update-fix.py
new file mode 100644
index 0000000000000000000000000000000000000000..d2ac10074607544d0b9cdaf4372e43c7f62bb8d0
--- /dev/null
+++ b/custom_nodes/ComfyUI-Manager/scripts/update-fix.py
@@ -0,0 +1,12 @@
+import git
+
+commit_hash = "a361cc1"
+
+repo = git.Repo('.')
+
+if repo.is_dirty():
+    repo.git.stash()
+
+repo.git.update_ref("refs/remotes/origin/main", commit_hash)
+repo.remotes.origin.fetch()
+repo.git.pull("origin", "main")
diff --git a/custom_nodes/ComfyUI-Manager/snapshots/the_snapshot_files_are_located_here b/custom_nodes/ComfyUI-Manager/snapshots/the_snapshot_files_are_located_here
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/LICENSE b/custom_nodes/ComfyUI-VideoHelperSuite/LICENSE
new file mode 100644
index 0000000000000000000000000000000000000000..f288702d2fa16d3cdf0035b15a9fcbc552cd88e7
--- /dev/null
+++ b/custom_nodes/ComfyUI-VideoHelperSuite/LICENSE
@@ -0,0 +1,674 @@
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works.  By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.  We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors.  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. 
Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. 
Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. 
You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. 
+ + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. + + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. 
In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. + + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. 
+ + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. 
If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). 
To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. 
+ + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. 
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<https://www.gnu.org/philosophy/why-not-lgpl.html>.
diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/README.md b/custom_nodes/ComfyUI-VideoHelperSuite/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b21911a54cf6f006186fc351edef71a643f0a9a
--- /dev/null
+++ b/custom_nodes/ComfyUI-VideoHelperSuite/README.md
@@ -0,0 +1,110 @@
+# ComfyUI-VideoHelperSuite
+Nodes related to video workflows
+
+## I/O Nodes
+### Load Video
+Converts a video file into a series of images
+- video: The video file to be loaded
+- force_rate: Discards or duplicates frames as needed to hit a target frame rate. Disabled by setting to 0. This can be used to quickly match a suggested frame rate like the 8 fps of AnimateDiff.
+- force_size: Allows for quick resizing to a number of suggested sizes. Several options allow you to set only width or height and determine the other from aspect ratio.
+- frame_load_cap: The maximum number of frames which will be returned. This could also be thought of as the maximum batch size.
+- skip_first_frames: How many frames to skip from the start of the video after adjusting for a forced frame rate. By incrementing this number by the frame_load_cap, you can easily process a longer input video in parts.
+- select_every_nth: Allows for skipping a number of frames without considering the base frame rate or risking frame duplication. Often useful when working with animated gifs.
+
+A path variant of the Load Video node exists that allows loading videos from external paths.
+![step](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite/assets/4284322/b5fc993c-5c9b-4608-afa4-48ae2e1380ef)
+![resize](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite/assets/4284322/98d2e78e-1c44-443c-a8fe-0dab0b5947f3)
+If [Advanced Previews](#advanced-previews) is enabled in the options menu of the web ui, the preview will reflect the current settings on the node.
+### Load Image Sequence
+Loads all image files from a subfolder. Options are similar to Load Video.
+- image_load_cap: The maximum number of images which will be returned. This could also be thought of as the maximum batch size.
+- skip_first_images: How many images to skip. By incrementing this number by image_load_cap, you can easily divide a long sequence of images into multiple batches.
+- select_every_nth: Allows for skipping a number of images between every returned frame.
+
+A path variant of Load Image Sequence also exists.
+### Video Combine
+Combines a series of images into an output video.
+If the optional audio input is provided, it will also be combined into the output video.
+- frame_rate: How many of the input frames are displayed per second. A higher frame rate means that the output video plays faster and is shorter. This should usually be kept to 8 for AnimateDiff, or matched to the force_rate of a Load Video node.
+- loop_count: How many additional times the video should repeat
+- filename_prefix: The base file name used for output.
+  - You can save output to a subfolder: `subfolder/video`
+  - Like the builtin Save Image node, you can add timestamps. `%date:yyyy-MM-ddThh:mm:ss%` might become 2023-10-31T6:45:25
+- format: The file format to use. Advanced information on configuring or adding additional video formats can be found in the [Video Formats](#video-formats) section.
+- pingpong: Causes the input to be played back in reverse to create a clean loop.
+- save_output: Whether the image should be put into the output directory or the temp directory.
+
+Returns: a `VHS_FILENAMES` which consists of a boolean indicating if save_output is enabled and a list of the full filepaths of all generated outputs in the order created. Accordingly, `output[1][-1]` will be the most complete output.
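+
+For example, a downstream script could pull the newest file out of this tuple with a helper along these lines (a minimal sketch; `newest_output` is an illustrative name, not part of the suite):
+```python
+def newest_output(vhs_filenames):
+    # VHS_FILENAMES is (save_output_enabled, [filepaths in creation order]).
+    save_output_enabled, filepaths = vhs_filenames
+    return filepaths[-1] if filepaths else None
+```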
+
+Depending on the format chosen, additional options may become available, including:
+- crf: Describes the quality of the output video. A lower number gives a higher quality video and a larger file size, while a higher number gives a lower quality video with a smaller size. Scaling varies by codec, but visually lossless output generally occurs around 20.
+- save_metadata: Includes a copy of the workflow in the output video which can be loaded by dragging and dropping the video, just like with images.
+- pix_fmt: Changes how the pixel data is stored. `yuv420p10le` has higher color quality, but won't work on all devices.
+### Load Audio
+Provides a way to load standalone audio files.
+- seek_seconds: An optional start time for the audio file in seconds.
+
+## Latent/Image Nodes
+A number of utility nodes exist for managing latents. For each, there is an equivalent node which works on images.
+### Split Batch
+Divides the latents into two sets. The first `split_index` latents go to output A and the remainder to output B. If fewer than `split_index` latents are provided as input, all are passed to output A and output B is empty.
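+
+The slicing matches the node's implementation (see `SplitLatents` in `videohelpersuite/image_latent_nodes.py`); a minimal torch sketch of the semantics:
+```python
+import torch
+
+def split_batch(samples: torch.Tensor, split_index: int):
+    # Everything before split_index goes to output A, the rest to output B.
+    return samples[:split_index], samples[split_index:]
+
+# An 8-frame batch split at index 5 -> A holds 5 frames, B holds 3.
+a, b = split_batch(torch.zeros(8, 4, 64, 64), 5)
+```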
+### Merge Batch
+Combines two groups of latents into a single output. The order of the output is the latents in A followed by the latents in B.
+If the input groups are not the same size, the node provides options for rescaling the latents before merging.
+### Select Every Nth
+The first of every `select_every_nth` input is passed and the remainder are discarded
+### Get Count
+### Duplicate Batch
+
+## Video Previews
+Load Video (Upload), Load Video (Path), Load Images (Upload), Load Images (Path) and Video Combine provide animated previews.
+Nodes with previews provide additional functionality when right clicked:
+- Open preview
+- Save preview
+- Pause preview: Can improve performance with very large videos
+- Hide preview: Can improve performance, save space
+- Sync preview: Restarts all previews for side-by-side comparisons
+
+### Advanced Previews
+Advanced Previews must be manually enabled by clicking the settings gear next to Queue Prompt and checking the box for VHS Advanced Previews.
+If enabled, videos which are displayed in the ui will be converted with ffmpeg on request. This has several benefits:
+- Previews for Load Video nodes will reflect the settings on the node such as skip_first_frames and frame_load_cap
+  - This makes it easy to select an exact portion of an input video and sync it with outputs
+- It can use substantially less bandwidth if running the server remotely
+- It can greatly improve the browser performance by downsizing videos to the in-ui resolution, particularly useful with animated gifs
+- It allows for previews of videos that would not normally be playable in browser.
+- Can be limited to subdirectories of ComfyUI if `VHS_STRICT_PATHS` is set as an environment variable.
+
+This functionality is disabled by default since it comes with several downsides:
+- There is a delay before videos show in the browser. This delay can become quite large if the input video is long
+- The preview videos are lower quality (The original can always be viewed with Right Click -> Open preview)
+
+## Video Formats
+Those familiar with ffmpeg are able to add json files to the video_formats folders to add new output types to Video Combine.
+Consider the following example for av1-webm
+```json
+{
+    "main_pass":
+    [
+        "-n", "-c:v", "libsvtav1",
+        "-pix_fmt", "yuv420p10le",
+        "-crf", ["crf","INT", {"default": 23, "min": 0, "max": 100, "step": 1}]
+    ],
+    "audio_pass": ["-c:a", "libopus"],
+    "extension": "webm",
+    "environment": {"SVT_LOG": "1"}
+}
+```
+Most configuration takes place in `main_pass`, which is a list of arguments that are passed to ffmpeg.
+- `"-n"` designates that the command should fail if a file of the same name already exists. This should never happen, but if some bug were to occur, it would ensure other files aren't overwritten.
+- `"-c:v", "libsvtav1"` designates that the video should be encoded with an av1 codec using the new SVT-AV1 encoder. SVT-AV1 is much faster than libaom-av1, but may not exist in older versions of ffmpeg. Alternatively, av1_nvenc could be used for gpu encoding with newer nvidia cards.
+- `"-pix_fmt", "yuv420p10le"` designates the standard pixel format with 10-bit color. It's important that some pixel format be specified to ensure a nonconfigurable input pix_fmt isn't used.
+
+`audio_pass` contains a list of arguments which are passed to ffmpeg when audio is passed into Video Combine.
+
+`extension` designates both the file extension and the container format that is used. If some of the above options are omitted from `main_pass`, it can affect what default options are chosen.
+
+`environment` can optionally be provided to set environment variables during execution. For av1 it's used to reduce the verbosity of logging so that only major errors are displayed.
+
+`input_color_depth` affects the format in which pixels are passed to the ffmpeg subprocess. Current valid options are `8bit` and `16bit`. The latter will produce higher quality output, but is experimental.
+
+Fields can be exposed in the webui as a widget using a format similar to what is used in the creation of custom nodes. In the above example, the argument for `-crf` will be exposed as a format widget in the webui. Format widgets are a list of up to 3 terms:
+- The name of the widget that will be displayed in the web ui
+- Either a primitive such as "INT" or "BOOLEAN", or a list of string options
+- A dictionary of options
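+
+To make the widget substitution concrete, here is a rough sketch of how a `main_pass` could be expanded once widget values are chosen (illustrative only; `expand_main_pass` is an assumed name, not the suite's actual internals):
+```python
+import json
+
+def expand_main_pass(format_path, widget_values):
+    with open(format_path) as f:
+        fmt = json.load(f)
+    args = []
+    for arg in fmt["main_pass"]:
+        if isinstance(arg, list):  # widget definition: [name, type or options, config]
+            name = arg[0]
+            default = arg[2]["default"] if len(arg) > 2 else arg[1][0]
+            args.append(str(widget_values.get(name, default)))
+        else:
+            args.append(arg)
+    return args
+
+# expand_main_pass("video_formats/av1-webm.json", {"crf": 30})
+# -> ['-n', '-c:v', 'libsvtav1', '-pix_fmt', 'yuv420p10le', '-crf', '30']
+```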
diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/__init__.py b/custom_nodes/ComfyUI-VideoHelperSuite/__init__.py
new file mode 100755
index 0000000000000000000000000000000000000000..cae39593a7307fe8dd8a9055e643fd572fb988e8
--- /dev/null
+++ b/custom_nodes/ComfyUI-VideoHelperSuite/__init__.py
@@ -0,0 +1,6 @@
+from .videohelpersuite.nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
+import folder_paths
+from .videohelpersuite.server import server
+
+WEB_DIRECTORY = "./web"
+__all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS", "WEB_DIRECTORY"]
diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/requirements.txt b/custom_nodes/ComfyUI-VideoHelperSuite/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4fa34aa21b85c4b974e2a2b6891eae5fd6dd4164
--- /dev/null
+++ b/custom_nodes/ComfyUI-VideoHelperSuite/requirements.txt
@@ -0,0 +1,2 @@
+opencv-python
+imageio-ffmpeg
diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/16bit-png.json b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/16bit-png.json
new file mode 100644
index 0000000000000000000000000000000000000000..b768bdbcfe8950cb7bde37d02444f0191ad29f51
--- /dev/null
+++ b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/16bit-png.json
@@ -0,0 +1,9 @@
+{
+    "main_pass":
+    [
+        "-n",
+        "-pix_fmt", "rgba64"
+    ],
+    "input_color_depth": "16bit",
+    "extension": "%03d.png"
+}
diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/ProRes.json b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/ProRes.json
new file mode 100644
index 0000000000000000000000000000000000000000..84ff1fe38e9aa610c98b99b5987b34132fd21e5c
--- /dev/null
+++ b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/ProRes.json
@@ -0,0 +1,10 @@
+{
+    "main_pass":
+    [
+        "-n", "-c:v", "prores_ks",
+        "-profile:v","3",
+        "-pix_fmt", "yuv422p10"
+    ],
+    "audio_pass": ["-c:a", "pcm_s16le"],
+    "extension": "mov"
+}
diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/av1-webm.json b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/av1-webm.json
new file mode 100644
index 0000000000000000000000000000000000000000..ceb53b4dae01df7a016a8859e2ccc53d9d849038
--- /dev/null
+++ b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/av1-webm.json
@@ -0,0 +1,13 @@
+{
+    "main_pass":
+    [
+        "-n", "-c:v", "libsvtav1",
+        "-pix_fmt", ["pix_fmt", ["yuv420p10le", "yuv420p"]],
+        "-crf", ["crf","INT", {"default": 
23, "min": 0, "max": 100, "step": 1}] + ], + "audio_pass": ["-c:a", "libopus"], + "input_color_depth": ["input_color_depth", ["8bit", "16bit"]], + "save_metadata": ["save_metadata", "BOOLEAN", {"default": true}], + "extension": "webm", + "environment": {"SVT_LOG": "1"} +} diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/gifski.json b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/gifski.json new file mode 100644 index 0000000000000000000000000000000000000000..27a06ff732718a6fd9af0b727ba495f8df024625 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/gifski.json @@ -0,0 +1,13 @@ +{ + "main_pass": + [ + "-n", + "-pix_fmt", "yuv420p", + "-crf", "20", + "-b:v", "0" + ], + "extension": "webm", + "gifski_pass": [ + "-Q", ["quality","INT", {"default": 90, "min": 1, "max": 100, "step": 1}] + ] +} diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/h264-mp4.json b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/h264-mp4.json new file mode 100644 index 0000000000000000000000000000000000000000..c860f921c32231996a6fe7355172e65a214b00ad --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/h264-mp4.json @@ -0,0 +1,11 @@ +{ + "main_pass": + [ + "-n", "-c:v", "libx264", + "-pix_fmt", ["pix_fmt", ["yuv420p", "yuv420p10le"]], + "-crf", ["crf","INT", {"default": 19, "min": 0, "max": 100, "step": 1}] + ], + "audio_pass": ["-c:a", "aac"], + "save_metadata": ["save_metadata", "BOOLEAN", {"default": true}], + "extension": "mp4" +} diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/h265-mp4.json b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/h265-mp4.json new file mode 100644 index 0000000000000000000000000000000000000000..7fe0218b23b2a8bd330b4bb6ee2f96c8007a4cd0 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/h265-mp4.json @@ -0,0 +1,14 @@ +{ + "main_pass": + [ + "-n", "-c:v", "libx265", + "-vtag", "hvc1", + "-pix_fmt", ["pix_fmt", ["yuv420p10le", "yuv420p"]], + "-crf", ["crf","INT", {"default": 22, "min": 0, "max": 100, "step": 1}], + "-preset", "medium", + "-x265-params", "log-level=quiet" + ], + "audio_pass": ["-c:a", "aac"], + "save_metadata": ["save_metadata", "BOOLEAN", {"default": true}], + "extension": "mp4" +} diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/nvenc_h264-mp4.json b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/nvenc_h264-mp4.json new file mode 100644 index 0000000000000000000000000000000000000000..4253a7c9b81bb9ae30c26ef06fcfcc6b80040044 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/nvenc_h264-mp4.json @@ -0,0 +1,12 @@ +{ + "main_pass": + [ + "-n", "-c:v", "h264_nvenc", + "-pix_fmt", ["pix_fmt", ["yuv420p", "yuv420p10le"]] + ], + "audio_pass": ["-c:a", "aac"], + "bitrate": ["bitrate","INT", {"default": 10, "min": 1, "max": 999, "step": 1 }], + "megabit": ["megabit","BOOLEAN", {"default": true}], + "save_metadata": ["save_metadata", "BOOLEAN", {"default": true}], + "extension": "mp4" +} diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/nvenc_hevc-mp4.json b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/nvenc_hevc-mp4.json new file mode 100644 index 0000000000000000000000000000000000000000..e412ca1cda10e77a0c5d93f0ac1f403593630227 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/nvenc_hevc-mp4.json @@ -0,0 +1,13 @@ +{ + "main_pass": + [ + "-n", "-c:v", "hevc_nvenc", + "-vtag", "hvc1", + "-pix_fmt", ["pix_fmt", ["yuv420p", "yuv420p10le"]] + ], + "audio_pass": ["-c:a", "aac"], + 
"bitrate": ["bitrate","INT", {"default": 10, "min": 1, "max": 999, "step": 1 }], + "megabit": ["megabit","BOOLEAN", {"default": true}], + "save_metadata": ["save_metadata", "BOOLEAN", {"default": true}], + "extension": "mp4" +} diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/webm.json b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/webm.json new file mode 100644 index 0000000000000000000000000000000000000000..66eacb1144704d05680dab38d59d399f966bc723 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/video_formats/webm.json @@ -0,0 +1,12 @@ +{ + "main_pass": + [ + "-n", + "-pix_fmt", "yuv420p", + "-crf", ["crf","INT", {"default": 20, "min": 0, "max": 100, "step": 1}], + "-b:v", "0" + ], + "audio_pass": ["-c:a", "libvorbis"], + "save_metadata": ["save_metadata", "BOOLEAN", {"default": true}], + "extension": "webm" +} diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/batched_nodes.py b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/batched_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..c627ef913b02bd7c7a92186685d4d1904b63cdf7 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/batched_nodes.py @@ -0,0 +1,48 @@ +import torch +from nodes import VAEEncode + + +class VAEDecodeBatched: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "samples": ("LATENT", ), + "vae": ("VAE", ), + "per_batch": ("INT", {"default": 16, "min": 1}) + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/batched nodes" + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "decode" + + def decode(self, vae, samples, per_batch): + decoded = [] + for start_idx in range(0, samples["samples"].shape[0], per_batch): + decoded.append(vae.decode(samples["samples"][start_idx:start_idx+per_batch])) + return (torch.cat(decoded, dim=0), ) + + +class VAEEncodeBatched: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "pixels": ("IMAGE", ), "vae": ("VAE", ), + "per_batch": ("INT", {"default": 16, "min": 1}) + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/batched nodes" + + RETURN_TYPES = ("LATENT",) + FUNCTION = "encode" + + def encode(self, vae, pixels, per_batch): + t = [] + for start_idx in range(0, pixels.shape[0], per_batch): + sub_pixels = VAEEncode.vae_encode_crop_pixels(pixels[start_idx:start_idx+per_batch]) + t.append(vae.encode(sub_pixels[:,:,:,:3])) + return ({"samples": torch.cat(t, dim=0)}, ) diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/image_latent_nodes.py b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/image_latent_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..a89a56ad278586235c47cf63affca2bd8d64076e --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/image_latent_nodes.py @@ -0,0 +1,458 @@ +from torch import Tensor +import torch + +import comfy.utils + +from .utils import BIGMIN, BIGMAX + + +class MergeStrategies: + MATCH_A = "match A" + MATCH_B = "match B" + MATCH_SMALLER = "match smaller" + MATCH_LARGER = "match larger" + + list_all = [MATCH_A, MATCH_B, MATCH_SMALLER, MATCH_LARGER] + + +class ScaleMethods: + NEAREST_EXACT = "nearest-exact" + BILINEAR = "bilinear" + AREA = "area" + BICUBIC = "bicubic" + BISLERP = "bislerp" + + list_all = [NEAREST_EXACT, BILINEAR, AREA, BICUBIC, BISLERP] + + +class CropMethods: + DISABLED = "disabled" + CENTER = "center" + + list_all = [DISABLED, CENTER] + + +class SplitLatents: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "latents": ("LATENT",), + 
"split_index": ("INT", {"default": 0, "step": 1, "min": BIGMIN, "max": BIGMAX}), + }, + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/latent" + + RETURN_TYPES = ("LATENT", "INT", "LATENT", "INT") + RETURN_NAMES = ("LATENT_A", "A_count", "LATENT_B", "B_count") + FUNCTION = "split_latents" + + def split_latents(self, latents: dict, split_index: int): + latents = latents.copy() + group_a = latents["samples"][:split_index] + group_b = latents["samples"][split_index:] + group_a_latent = {"samples": group_a} + group_b_latent = {"samples": group_b} + return (group_a_latent, group_a.size(0), group_b_latent, group_b.size(0)) + + +class SplitImages: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "images": ("IMAGE",), + "split_index": ("INT", {"default": 0, "step": 1, "min": BIGMIN, "max": BIGMAX}), + }, + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/image" + + RETURN_TYPES = ("IMAGE", "INT", "IMAGE", "INT") + RETURN_NAMES = ("IMAGE_A", "A_count", "IMAGE_B", "B_count") + FUNCTION = "split_images" + + def split_images(self, images: Tensor, split_index: int): + group_a = images[:split_index] + group_b = images[split_index:] + return (group_a, group_a.size(0), group_b, group_b.size(0)) + + +class SplitMasks: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "mask": ("MASK",), + "split_index": ("INT", {"default": 0, "step": 1, "min": BIGMIN, "max": BIGMAX}), + }, + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/mask" + + RETURN_TYPES = ("MASK", "INT", "MASK", "INT") + RETURN_NAMES = ("MASK_A", "A_count", "MASK_B", "B_count") + FUNCTION = "split_masks" + + def split_masks(self, mask: Tensor, split_index: int): + group_a = mask[:split_index] + group_b = mask[split_index:] + return (group_a, group_a.size(0), group_b, group_b.size(0)) + + +class MergeLatents: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "latents_A": ("LATENT",), + "latents_B": ("LATENT",), + "merge_strategy": (MergeStrategies.list_all,), + "scale_method": (ScaleMethods.list_all,), + "crop": (CropMethods.list_all,), + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/latent" + + RETURN_TYPES = ("LATENT", "INT",) + RETURN_NAMES = ("LATENT", "count",) + FUNCTION = "merge" + + def merge(self, latents_A: dict, latents_B: dict, merge_strategy: str, scale_method: str, crop: str): + latents = [] + latents_A = latents_A.copy()["samples"] + latents_B = latents_B.copy()["samples"] + + # if not same dimensions, do scaling + if latents_A.shape[3] != latents_B.shape[3] or latents_A.shape[2] != latents_B.shape[2]: + A_size = latents_A.shape[3] * latents_A.shape[2] + B_size = latents_B.shape[3] * latents_B.shape[2] + # determine which to use + use_A_as_template = True + if merge_strategy == MergeStrategies.MATCH_A: + pass + elif merge_strategy == MergeStrategies.MATCH_B: + use_A_as_template = False + elif merge_strategy in (MergeStrategies.MATCH_SMALLER, MergeStrategies.MATCH_LARGER): + if A_size <= B_size: + use_A_as_template = True if merge_strategy == MergeStrategies.MATCH_SMALLER else False + # apply scaling + if use_A_as_template: + latents_B = comfy.utils.common_upscale(latents_B, latents_A.shape[3], latents_A.shape[2], scale_method, crop) + else: + latents_A = comfy.utils.common_upscale(latents_A, latents_B.shape[3], latents_B.shape[2], scale_method, crop) + + latents.append(latents_A) + latents.append(latents_B) + + merged = {"samples": torch.cat(latents, dim=0)} + return (merged, len(merged["samples"]),) + + +class MergeImages: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "images_A": 
("IMAGE",), + "images_B": ("IMAGE",), + "merge_strategy": (MergeStrategies.list_all,), + "scale_method": (ScaleMethods.list_all,), + "crop": (CropMethods.list_all,), + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/image" + + RETURN_TYPES = ("IMAGE", "INT",) + RETURN_NAMES = ("IMAGE", "count",) + FUNCTION = "merge" + + def merge(self, images_A: Tensor, images_B: Tensor, merge_strategy: str, scale_method: str, crop: str): + images = [] + # if not same dimensions, do scaling + if images_A.shape[3] != images_B.shape[3] or images_A.shape[2] != images_B.shape[2]: + images_A = images_A.movedim(-1,1) + images_B = images_B.movedim(-1,1) + + A_size = images_A.shape[3] * images_A.shape[2] + B_size = images_B.shape[3] * images_B.shape[2] + # determine which to use + use_A_as_template = True + if merge_strategy == MergeStrategies.MATCH_A: + pass + elif merge_strategy == MergeStrategies.MATCH_B: + use_A_as_template = False + elif merge_strategy in (MergeStrategies.MATCH_SMALLER, MergeStrategies.MATCH_LARGER): + if A_size <= B_size: + use_A_as_template = True if merge_strategy == MergeStrategies.MATCH_SMALLER else False + # apply scaling + if use_A_as_template: + images_B = comfy.utils.common_upscale(images_B, images_A.shape[3], images_A.shape[2], scale_method, crop) + else: + images_A = comfy.utils.common_upscale(images_A, images_B.shape[3], images_B.shape[2], scale_method, crop) + images_A = images_A.movedim(1,-1) + images_B = images_B.movedim(1,-1) + + images.append(images_A) + images.append(images_B) + all_images = torch.cat(images, dim=0) + return (all_images, all_images.size(0),) + + +class MergeMasks: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "mask_A": ("MASK",), + "mask_B": ("MASK",), + "merge_strategy": (MergeStrategies.list_all,), + "scale_method": (ScaleMethods.list_all,), + "crop": (CropMethods.list_all,), + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/mask" + + RETURN_TYPES = ("MASK", "INT",) + RETURN_NAMES = ("MASK", "count",) + FUNCTION = "merge" + + def merge(self, mask_A: Tensor, mask_B: Tensor, merge_strategy: str, scale_method: str, crop: str): + masks = [] + # if not same dimensions, do scaling + if mask_A.shape[2] != mask_B.shape[2] or mask_A.shape[1] != mask_B.shape[1]: + A_size = mask_A.shape[2] * mask_A.shape[1] + B_size = mask_B.shape[2] * mask_B.shape[1] + # determine which to use + use_A_as_template = True + if merge_strategy == MergeStrategies.MATCH_A: + pass + elif merge_strategy == MergeStrategies.MATCH_B: + use_A_as_template = False + elif merge_strategy in (MergeStrategies.MATCH_SMALLER, MergeStrategies.MATCH_LARGER): + if A_size <= B_size: + use_A_as_template = True if merge_strategy == MergeStrategies.MATCH_SMALLER else False + # add dimension where image channels would be expected to work with common_upscale + mask_A = torch.unsqueeze(mask_A, 1) + mask_B = torch.unsqueeze(mask_B, 1) + # apply scaling + if use_A_as_template: + mask_B = comfy.utils.common_upscale(mask_B, mask_A.shape[3], mask_A.shape[2], scale_method, crop) + else: + mask_A = comfy.utils.common_upscale(mask_A, mask_B.shape[3], mask_B.shape[2], scale_method, crop) + # undo dimension increase + mask_A = torch.squeeze(mask_A, 1) + mask_B = torch.squeeze(mask_B, 1) + + masks.append(mask_A) + masks.append(mask_B) + all_masks = torch.cat(masks, dim=0) + return (all_masks, all_masks.size(0),) + + +class SelectEveryNthLatent: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "latents": ("LATENT",), + "select_every_nth": ("INT", {"default": 1, "min": 1, "max": BIGMAX, 
"step": 1}), + }, + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/latent" + + RETURN_TYPES = ("LATENT", "INT",) + RETURN_NAMES = ("LATENT", "count",) + FUNCTION = "select_latents" + + def select_latents(self, latents: dict, select_every_nth: int): + sub_latents = latents.copy()["samples"][0::select_every_nth] + return ({"samples": sub_latents}, sub_latents.size(0)) + + +class SelectEveryNthImage: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "images": ("IMAGE",), + "select_every_nth": ("INT", {"default": 1, "min": 1, "max": BIGMAX, "step": 1}), + }, + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/image" + + RETURN_TYPES = ("IMAGE", "INT",) + RETURN_NAMES = ("IMAGE", "count",) + FUNCTION = "select_images" + + def select_images(self, images: Tensor, select_every_nth: int): + sub_images = images[0::select_every_nth] + return (sub_images, sub_images.size(0)) + + +class SelectEveryNthMask: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "mask": ("MASK",), + "select_every_nth": ("INT", {"default": 1, "min": 1, "max": BIGMAX, "step": 1}), + }, + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/mask" + + RETURN_TYPES = ("MASK", "INT",) + RETURN_NAMES = ("MASK", "count",) + FUNCTION = "select_masks" + + def select_masks(self, mask: Tensor, select_every_nth: int): + sub_mask = mask[0::select_every_nth] + return (sub_mask, sub_mask.size(0)) + + +class GetLatentCount: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "latents": ("LATENT",), + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/latent" + + RETURN_TYPES = ("INT",) + RETURN_NAMES = ("count",) + FUNCTION = "count_input" + + def count_input(self, latents: dict): + return (latents["samples"].size(0),) + + +class GetImageCount: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "images": ("IMAGE",), + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/image" + + RETURN_TYPES = ("INT",) + RETURN_NAMES = ("count",) + FUNCTION = "count_input" + + def count_input(self, images: Tensor): + return (images.size(0),) + + +class GetMaskCount: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "mask": ("MASK",), + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/mask" + + RETURN_TYPES = ("INT",) + RETURN_NAMES = ("count",) + FUNCTION = "count_input" + + def count_input(self, mask: Tensor): + return (mask.size(0),) + + +class DuplicateLatents: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "latents": ("LATENT",), + "multiply_by": ("INT", {"default": 1, "min": 1, "max": BIGMAX, "step": 1}) + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/latent" + + RETURN_TYPES = ("LATENT", "INT",) + RETURN_NAMES = ("LATENT", "count",) + FUNCTION = "duplicate_input" + + def duplicate_input(self, latents: dict[str, Tensor], multiply_by: int): + new_latents = latents.copy() + full_latents = [] + for n in range(0, multiply_by): + full_latents.append(new_latents["samples"]) + new_latents["samples"] = torch.cat(full_latents, dim=0) + return (new_latents, new_latents["samples"].size(0),) + + +class DuplicateImages: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "images": ("IMAGE",), + "multiply_by": ("INT", {"default": 1, "min": 1, "max": BIGMAX, "step": 1}) + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/image" + + RETURN_TYPES = ("IMAGE", "INT",) + RETURN_NAMES = ("IMAGE", "count",) + FUNCTION = "duplicate_input" + + def duplicate_input(self, images: Tensor, multiply_by: int): + full_images = [] + for n in range(0, multiply_by): + full_images.append(images) + new_images = 
torch.cat(full_images, dim=0) + return (new_images, new_images.size(0),) + + +class DuplicateMasks: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "mask": ("MASK",), + "multiply_by": ("INT", {"default": 1, "min": 1, "max": BIGMAX, "step": 1}) + } + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢/mask" + + RETURN_TYPES = ("MASK", "INT",) + RETURN_NAMES = ("MASK", "count",) + FUNCTION = "duplicate_input" + + def duplicate_input(self, mask: Tensor, multiply_by: int): + full_masks = [] + for n in range(0, multiply_by): + full_masks.append(mask) + new_mask = torch.cat(full_masks, dim=0) + return (new_mask, new_mask.size(0),) + + +# class SelectLatents: +# @classmethod +# def INPUT_TYPES(s): +# return { +# "required": { +# "images": ("IMAGE",), +# "select_indeces": ("STRING", {"default": ""}), +# }, +# } diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/load_images_nodes.py b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/load_images_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..1e708171eed47831215f8d5c299b1e698498ab4b --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/load_images_nodes.py @@ -0,0 +1,157 @@ +import os +import hashlib +import numpy as np +import torch +from PIL import Image, ImageOps + +import folder_paths +from comfy.k_diffusion.utils import FolderOfImages +from .logger import logger +from .utils import BIGMAX, calculate_file_hash, get_sorted_dir_files_from_directory, validate_path + + +def is_changed_load_images(directory: str, image_load_cap: int = 0, skip_first_images: int = 0, select_every_nth: int = 1): + if not os.path.isdir(directory): + return False + + dir_files = get_sorted_dir_files_from_directory(directory, skip_first_images, select_every_nth, FolderOfImages.IMG_EXTENSIONS) + if image_load_cap != 0: + dir_files = dir_files[:image_load_cap] + + m = hashlib.sha256() + for filepath in dir_files: + m.update(calculate_file_hash(filepath).encode()) # strings must be encoded before hashing + return m.digest().hex() + + +def validate_load_images(directory: str): + if not os.path.isdir(directory): + return f"Directory '{directory}' cannot be found." + dir_files = os.listdir(directory) + if len(dir_files) == 0: + return f"No files in directory '{directory}'." + + return True + + +def load_images(directory: str, image_load_cap: int = 0, skip_first_images: int = 0, select_every_nth: int = 1): + if not os.path.isdir(directory): + raise FileNotFoundError(f"Directory '{directory} cannot be found.") + + dir_files = get_sorted_dir_files_from_directory(directory, skip_first_images, select_every_nth, FolderOfImages.IMG_EXTENSIONS) + + if len(dir_files) == 0: + raise FileNotFoundError(f"No files in directory '{directory}'.") + + images = [] + masks = [] + + limit_images = False + if image_load_cap > 0: + limit_images = True + image_count = 0 + loaded_alpha = False + zero_mask = torch.zeros((64,64), dtype=torch.float32, device="cpu") + + for image_path in dir_files: + if limit_images and image_count >= image_load_cap: + break + i = Image.open(image_path) + i = ImageOps.exif_transpose(i) + image = i.convert("RGB") + image = np.array(image).astype(np.float32) / 255.0 + image = torch.from_numpy(image)[None,] + if 'A' in i.getbands(): + mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0 + mask = 1. 
- torch.from_numpy(mask) + if not loaded_alpha: + loaded_alpha = True + zero_mask = torch.zeros((len(image[0]),len(image[0][0])), dtype=torch.float32, device="cpu") + masks = [zero_mask] * image_count + else: + mask = zero_mask + images.append(image) + masks.append(mask) + image_count += 1 + + if len(images) == 0: + raise FileNotFoundError(f"No images could be loaded from directory '{directory}'.") + + return (torch.cat(images, dim=0), torch.stack(masks, dim=0), image_count) + + +class LoadImagesFromDirectoryUpload: + @classmethod + def INPUT_TYPES(s): + input_dir = folder_paths.get_input_directory() + directories = [] + for item in os.listdir(input_dir): + if not os.path.isfile(os.path.join(input_dir, item)) and item != "clipspace": + directories.append(item) + return { + "required": { + "directory": (directories,), + }, + "optional": { + "image_load_cap": ("INT", {"default": 0, "min": 0, "max": BIGMAX, "step": 1}), + "skip_first_images": ("INT", {"default": 0, "min": 0, "max": BIGMAX, "step": 1}), + "select_every_nth": ("INT", {"default": 1, "min": 1, "max": BIGMAX, "step": 1}), + } + } + + RETURN_TYPES = ("IMAGE", "MASK", "INT") + FUNCTION = "load_images" + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢" + + def load_images(self, directory: str, **kwargs): + directory = folder_paths.get_annotated_filepath(directory.strip()) + return load_images(directory, **kwargs) + + @classmethod + def IS_CHANGED(s, directory: str, **kwargs): + directory = folder_paths.get_annotated_filepath(directory.strip()) + return is_changed_load_images(directory, **kwargs) + + @classmethod + def VALIDATE_INPUTS(s, directory: str, **kwargs): + directory = folder_paths.get_annotated_filepath(directory.strip()) + return validate_load_images(directory) + + +class LoadImagesFromDirectoryPath: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "directory": ("STRING", {"default": "X://path/to/images", "vhs_path_extensions": []}), + }, + "optional": { + "image_load_cap": ("INT", {"default": 0, "min": 0, "max": BIGMAX, "step": 1}), + "skip_first_images": ("INT", {"default": 0, "min": 0, "max": BIGMAX, "step": 1}), + "select_every_nth": ("INT", {"default": 1, "min": 1, "max": BIGMAX, "step": 1}), + } + } + + RETURN_TYPES = ("IMAGE", "MASK", "INT") + FUNCTION = "load_images" + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢" + + def load_images(self, directory: str, **kwargs): + if directory is None or validate_load_images(directory) != True: + raise Exception("directory is not valid: " + directory) + + return load_images(directory, **kwargs) + + @classmethod + def IS_CHANGED(s, directory: str, **kwargs): + if directory is None: + return "input" + return is_changed_load_images(directory, **kwargs) + + @classmethod + def VALIDATE_INPUTS(s, directory: str, **kwargs): + if directory is None: + return True + return validate_load_images(directory) diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/load_video_nodes.py b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/load_video_nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..1292d915a07b9b5185c07d56ea2e492478ea26c7 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/load_video_nodes.py @@ -0,0 +1,236 @@ +import os +import itertools +import numpy as np +import torch +from PIL import Image, ImageOps +import cv2 + +import folder_paths +from comfy.utils import common_upscale +from .logger import logger +from .utils import BIGMAX, DIMMAX, calculate_file_hash, get_sorted_dir_files_from_directory, get_audio, 
lazy_eval, hash_path, validate_path + + +video_extensions = ['webm', 'mp4', 'mkv', 'gif'] + + +def is_gif(filename) -> bool: + file_parts = filename.split('.') + return len(file_parts) > 1 and file_parts[-1] == "gif" + + +def target_size(width, height, force_size, custom_width, custom_height) -> tuple[int, int]: + if force_size == "Custom": + return (custom_width, custom_height) + elif force_size == "Custom Height": + force_size = "?x"+str(custom_height) + elif force_size == "Custom Width": + force_size = str(custom_width)+"x?" + + if force_size != "Disabled": + force_size = force_size.split("x") + if force_size[0] == "?": + width = (width*int(force_size[1]))//height + #Limit to a multple of 8 for latent conversion + width = int(width)+4 & ~7 + height = int(force_size[1]) + elif force_size[1] == "?": + height = (height*int(force_size[0]))//width + height = int(height)+4 & ~7 + width = int(force_size[0]) + else: + width = int(force_size[0]) + height = int(force_size[1]) + return (width, height) + +def cv_frame_generator(video, force_rate, frame_load_cap, skip_first_frames, + select_every_nth, batch_manager=None, unique_id=None): + try: + video_cap = cv2.VideoCapture(video) + if not video_cap.isOpened(): + raise ValueError(f"{video} could not be loaded with cv.") + # set video_cap to look at start_index frame + total_frame_count = 0 + total_frames_evaluated = -1 + frames_added = 0 + base_frame_time = 1/video_cap.get(cv2.CAP_PROP_FPS) + width = video_cap.get(cv2.CAP_PROP_FRAME_WIDTH) + height = video_cap.get(cv2.CAP_PROP_FRAME_HEIGHT) + prev_frame = None + if force_rate == 0: + target_frame_time = base_frame_time + else: + target_frame_time = 1/force_rate + yield (width, height, target_frame_time) + time_offset=target_frame_time - base_frame_time + while video_cap.isOpened(): + if time_offset < target_frame_time: + is_returned = video_cap.grab() + # if didn't return frame, video has ended + if not is_returned: + break + time_offset += base_frame_time + if time_offset < target_frame_time: + continue + time_offset -= target_frame_time + # if not at start_index, skip doing anything with frame + total_frame_count += 1 + if total_frame_count <= skip_first_frames: + continue + else: + total_frames_evaluated += 1 + + # if should not be selected, skip doing anything with frame + if total_frames_evaluated%select_every_nth != 0: + continue + + # opencv loads images in BGR format (yuck), so need to convert to RGB for ComfyUI use + # follow up: can videos ever have an alpha channel? + # To my testing: No. opencv has no support for alpha + unused, frame = video_cap.retrieve() + frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) + # convert frame to comfyui's expected format + # TODO: frame contains no exif information. 
Check if opencv2 has already applied + frame = np.array(frame, dtype=np.float32) / 255.0 + if prev_frame is not None: + inp = yield prev_frame + if inp is not None: + #ensure the finally block is called + return + prev_frame = frame + frames_added += 1 + # if cap exists and we've reached it, stop processing frames + if frame_load_cap > 0 and frames_added >= frame_load_cap: + break + if batch_manager is not None: + batch_manager.inputs.pop(unique_id) + batch_manager.has_closed_inputs = True + if prev_frame is not None: + yield prev_frame + finally: + video_cap.release() + +def load_video_cv(video: str, force_rate: int, force_size: str, + custom_width: int,custom_height: int, frame_load_cap: int, + skip_first_frames: int, select_every_nth: int, + batch_manager=None, unique_id=None): + if batch_manager is None or unique_id not in batch_manager.inputs: + gen = cv_frame_generator(video, force_rate, frame_load_cap, skip_first_frames, + select_every_nth, batch_manager, unique_id) + (width, height, target_frame_time) = next(gen) + width = int(width) + height = int(height) + if batch_manager is not None: + batch_manager.inputs[unique_id] = (gen, width, height, target_frame_time) + else: + (gen, width, height, target_frame_time) = batch_manager.inputs[unique_id] + if batch_manager is not None: + gen = itertools.islice(gen, batch_manager.frames_per_batch) + + #Some minor wizardry to eliminate a copy and reduce max memory by a factor of ~2 + images = torch.from_numpy(np.fromiter(gen, np.dtype((np.float32, (height, width, 3))))) + if len(images) == 0: + raise RuntimeError("No frames generated") + if force_size != "Disabled": + new_size = target_size(width, height, force_size, custom_width, custom_height) + if new_size[0] != width or new_size[1] != height: + s = images.movedim(-1,1) + s = common_upscale(s, new_size[0], new_size[1], "lanczos", "center") + images = s.movedim(1,-1) + + #Setup lambda for lazy audio capture + audio = lambda : get_audio(video, skip_first_frames * target_frame_time, + frame_load_cap*target_frame_time*select_every_nth) + return (images, len(images), lazy_eval(audio)) + + +class LoadVideoUpload: + @classmethod + def INPUT_TYPES(s): + input_dir = folder_paths.get_input_directory() + files = [] + for f in os.listdir(input_dir): + if os.path.isfile(os.path.join(input_dir, f)): + file_parts = f.split('.') + if len(file_parts) > 1 and (file_parts[-1] in video_extensions): + files.append(f) + return {"required": { + "video": (sorted(files),), + "force_rate": ("INT", {"default": 0, "min": 0, "max": 60, "step": 1}), + "force_size": (["Disabled", "Custom Height", "Custom Width", "Custom", "256x?", "?x256", "256x256", "512x?", "?x512", "512x512"],), + "custom_width": ("INT", {"default": 512, "min": 0, "max": DIMMAX, "step": 8}), + "custom_height": ("INT", {"default": 512, "min": 0, "max": DIMMAX, "step": 8}), + "frame_load_cap": ("INT", {"default": 0, "min": 0, "max": BIGMAX, "step": 1}), + "skip_first_frames": ("INT", {"default": 0, "min": 0, "max": BIGMAX, "step": 1}), + "select_every_nth": ("INT", {"default": 1, "min": 1, "max": BIGMAX, "step": 1}), + }, + "optional": { + "batch_manager": ("VHS_BatchManager",) + }, + "hidden": { + "unique_id": "UNIQUE_ID" + }, + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢" + + RETURN_TYPES = ("IMAGE", "INT", "VHS_AUDIO", ) + RETURN_NAMES = ("IMAGE", "frame_count", "audio",) + FUNCTION = "load_video" + + def load_video(self, **kwargs): + kwargs['video'] = folder_paths.get_annotated_filepath(kwargs['video'].strip("\"")) + return load_video_cv(**kwargs) + + 
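    #IS_CHANGED hashes the file path and its modification time (see calculate_file_hash), so cached results are reused until the file changes on disk +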
@classmethod + def IS_CHANGED(s, video, **kwargs): + image_path = folder_paths.get_annotated_filepath(video) + return calculate_file_hash(image_path) + + @classmethod + def VALIDATE_INPUTS(s, video, force_size, **kwargs): + if not folder_paths.exists_annotated_filepath(video): + return "Invalid video file: {}".format(video) + return True + + +class LoadVideoPath: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "video": ("STRING", {"default": "X://insert/path/here.mp4", "vhs_path_extensions": video_extensions}), + "force_rate": ("INT", {"default": 0, "min": 0, "max": 60, "step": 1}), + "force_size": (["Disabled", "Custom Height", "Custom Width", "Custom", "256x?", "?x256", "256x256", "512x?", "?x512", "512x512"],), + "custom_width": ("INT", {"default": 512, "min": 0, "max": DIMMAX, "step": 8}), + "custom_height": ("INT", {"default": 512, "min": 0, "max": DIMMAX, "step": 8}), + "frame_load_cap": ("INT", {"default": 0, "min": 0, "max": BIGMAX, "step": 1}), + "skip_first_frames": ("INT", {"default": 0, "min": 0, "max": BIGMAX, "step": 1}), + "select_every_nth": ("INT", {"default": 1, "min": 1, "max": BIGMAX, "step": 1}), + }, + "optional": { + "batch_manager": ("VHS_BatchManager",) + }, + "hidden": { + "unique_id": "UNIQUE_ID" + }, + } + + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢" + + RETURN_TYPES = ("IMAGE", "INT", "VHS_AUDIO", ) + RETURN_NAMES = ("IMAGE", "frame_count", "audio",) + FUNCTION = "load_video" + + def load_video(self, **kwargs): + if kwargs['video'] is None or validate_path(kwargs['video']) != True: + raise Exception("video is not a valid path: " + kwargs['video']) + return load_video_cv(**kwargs) + + @classmethod + def IS_CHANGED(s, video, **kwargs): + return hash_path(video) + + @classmethod + def VALIDATE_INPUTS(s, video, **kwargs): + return validate_path(video, allow_none=True) diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/logger.py b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/logger.py new file mode 100755 index 0000000000000000000000000000000000000000..6e7b8d64bda275608ba6bf8ee28d2a2112e3e2be --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/logger.py @@ -0,0 +1,36 @@ +import sys +import copy +import logging + + +class ColoredFormatter(logging.Formatter): + COLORS = { + "DEBUG": "\033[0;36m", # CYAN + "INFO": "\033[0;32m", # GREEN + "WARNING": "\033[0;33m", # YELLOW + "ERROR": "\033[0;31m", # RED + "CRITICAL": "\033[0;37;41m", # WHITE ON RED + "RESET": "\033[0m", # RESET COLOR + } + + def format(self, record): + colored_record = copy.copy(record) + levelname = colored_record.levelname + seq = self.COLORS.get(levelname, self.COLORS["RESET"]) + colored_record.levelname = f"{seq}{levelname}{self.COLORS['RESET']}" + return super().format(colored_record) + + +# Create a new logger +logger = logging.getLogger("VideoHelperSuite") +logger.propagate = False + +# Add handler if we don't have one. 
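+#(the handlers check prevents duplicate log output if this module is imported more than once)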
+if not logger.handlers: + handler = logging.StreamHandler(sys.stdout) + handler.setFormatter(ColoredFormatter("[%(name)s] - %(levelname)s - %(message)s")) + logger.addHandler(handler) + +# Configure logger +loglevel = logging.INFO +logger.setLevel(loglevel) diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..e9b13437c59c0aaaad1c6426254351c8cec85584 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py @@ -0,0 +1,620 @@ +import os +import sys +import json +import subprocess +import numpy as np +import re +import datetime +from typing import List +from PIL import Image, ExifTags +from PIL.PngImagePlugin import PngInfo +from pathlib import Path + +import folder_paths +from .logger import logger +from .image_latent_nodes import * +from .load_video_nodes import LoadVideoUpload, LoadVideoPath +from .load_images_nodes import LoadImagesFromDirectoryUpload, LoadImagesFromDirectoryPath +from .batched_nodes import VAEEncodeBatched, VAEDecodeBatched +from .utils import ffmpeg_path, get_audio, hash_path, validate_path, requeue_workflow, gifski_path + +folder_paths.folder_names_and_paths["VHS_video_formats"] = ( + [ + os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "video_formats"), + ], + [".json"] +) + +def gen_format_widgets(video_format): + for k in video_format: + if k.endswith("_pass"): + for i in range(len(video_format[k])): + if isinstance(video_format[k][i], list): + item = [video_format[k][i]] + yield item + video_format[k][i] = item[0] + else: + if isinstance(video_format[k], list): + item = [video_format[k]] + yield item + video_format[k] = item[0] + +def get_video_formats(): + formats = [] + for format_name in folder_paths.get_filename_list("VHS_video_formats"): + format_name = format_name[:-5] + video_format_path = folder_paths.get_full_path("VHS_video_formats", format_name + ".json") + with open(video_format_path, 'r') as stream: + video_format = json.load(stream) + if "gifski_pass" in video_format and gifski_path is None: + #Skip format + continue + widgets = [w[0] for w in gen_format_widgets(video_format)] + if (len(widgets) > 0): + formats.append(["video/" + format_name, widgets]) + else: + formats.append("video/" + format_name) + return formats + +def get_format_widget_defaults(format_name): + video_format_path = folder_paths.get_full_path("VHS_video_formats", format_name + ".json") + with open(video_format_path, 'r') as stream: + video_format = json.load(stream) + results = {} + for w in gen_format_widgets(video_format): + if len(w[0]) > 2 and 'default' in w[0][2]: + default = w[0][2]['default'] + else: + if type(w[0][1]) is list: + default = w[0][1][0] + else: + #NOTE: This doesn't respect max/min, but should be good enough as a fallback to a fallback to a fallback + default = {"BOOLEAN": False, "INT": 0, "FLOAT": 0, "STRING": ""}[w[0][1]] + results[w[0][0]] = default + return results + + +def apply_format_widgets(format_name, kwargs): + video_format_path = folder_paths.get_full_path("VHS_video_formats", format_name + ".json") + with open(video_format_path, 'r') as stream: + video_format = json.load(stream) + for w in gen_format_widgets(video_format): + assert(w[0][0] in kwargs) + w[0] = str(kwargs[w[0][0]]) + return video_format + +def tensor_to_int(tensor, bits): + #TODO: investigate benefit of rounding by adding 0.5 before clip/cast + tensor = tensor.cpu().numpy() * (2**bits-1) 
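+    #clipping (rather than rounding) keeps out-of-range values from wrapping when callers cast to uint8/uint16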
+ return np.clip(tensor, 0, (2**bits-1)) +def tensor_to_shorts(tensor): + return tensor_to_int(tensor, 16).astype(np.uint16) +def tensor_to_bytes(tensor): + return tensor_to_int(tensor, 8).astype(np.uint8) + +def ffmpeg_process(args, video_format, video_metadata, file_path, env): + + res = None + frame_data = yield + if video_format.get('save_metadata', 'False') != 'False': + os.makedirs(folder_paths.get_temp_directory(), exist_ok=True) + metadata = json.dumps(video_metadata) + metadata_path = os.path.join(folder_paths.get_temp_directory(), "metadata.txt") + #metadata from file should escape = ; # \ and newline + metadata = metadata.replace("\\","\\\\") + metadata = metadata.replace(";","\\;") + metadata = metadata.replace("#","\\#") + metadata = metadata.replace("=","\\=") + metadata = metadata.replace("\n","\\\n") + metadata = "comment=" + metadata + with open(metadata_path, "w") as f: + f.write(";FFMETADATA1\n") + f.write(metadata) + m_args = args[:1] + ["-i", metadata_path] + args[1:] + ["-metadata", "creation_time=now"] + with subprocess.Popen(m_args + [file_path], stderr=subprocess.PIPE, + stdin=subprocess.PIPE, env=env) as proc: + try: + while frame_data is not None: + proc.stdin.write(frame_data) + #TODO: skip flush for increased speed + proc.stdin.flush() + frame_data = yield + proc.stdin.close() + res = proc.stderr.read() + except BrokenPipeError as e: + err = proc.stderr.read() + #Check if output file exists. If it does, the re-execution + #will also fail. This obscures the cause of the error + #and seems to never occur concurrent to the metadata issue + if os.path.exists(file_path): + raise Exception("An error occured in the ffmpeg subprocess:\n" \ + + err.decode("utf-8")) + #Res was not set + print(err.decode("utf-8"), end="", file=sys.stderr) + logger.warn("An error occurred when saving with metadata") + if res != b'': + with subprocess.Popen(args + [file_path], stderr=subprocess.PIPE, + stdin=subprocess.PIPE, env=env) as proc: + try: + while frame_data is not None: + proc.stdin.write(frame_data) + proc.stdin.flush() + frame_data = yield + proc.stdin.close() + res = proc.stderr.read() + except BrokenPipeError as e: + res = proc.stderr.read() + raise Exception("An error occured in the ffmpeg subprocess:\n" \ + + res.decode("utf-8")) + if len(res) > 0: + print(res.decode("utf-8"), end="", file=sys.stderr) + +class VideoCombine: + @classmethod + def INPUT_TYPES(s): + #Hide ffmpeg formats if ffmpeg isn't available + if ffmpeg_path is not None: + ffmpeg_formats = get_video_formats() + else: + ffmpeg_formats = [] + return { + "required": { + "images": ("IMAGE",), + "frame_rate": ( + "INT", + {"default": 8, "min": 1, "step": 1}, + ), + "loop_count": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), + "filename_prefix": ("STRING", {"default": "AnimateDiff"}), + "format": (["image/gif", "image/webp"] + ffmpeg_formats,), + "pingpong": ("BOOLEAN", {"default": False}), + "save_output": ("BOOLEAN", {"default": True}), + }, + "optional": { + "audio": ("VHS_AUDIO",), + "batch_manager": ("VHS_BatchManager",) + }, + "hidden": { + "prompt": "PROMPT", + "extra_pnginfo": "EXTRA_PNGINFO", + "unique_id": "UNIQUE_ID" + }, + } + + RETURN_TYPES = ("VHS_FILENAMES",) + RETURN_NAMES = ("Filenames",) + OUTPUT_NODE = True + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢" + FUNCTION = "combine_video" + + def combine_video( + self, + images, + frame_rate: int, + loop_count: int, + filename_prefix="AnimateDiff", + format="image/gif", + pingpong=False, + save_output=True, + prompt=None, + extra_pnginfo=None, + 
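        #audio is a zero-argument callable (VHS_AUDIO); it is only called, and the audio only decoded, when audio is actually muxed into the output video +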
audio=None, + unique_id=None, + manual_format_widgets=None, + batch_manager=None + ): + # get output information + output_dir = ( + folder_paths.get_output_directory() + if save_output + else folder_paths.get_temp_directory() + ) + ( + full_output_folder, + filename, + _, + subfolder, + _, + ) = folder_paths.get_save_image_path(filename_prefix, output_dir) + output_files = [] + + metadata = PngInfo() + video_metadata = {} + if prompt is not None: + metadata.add_text("prompt", json.dumps(prompt)) + video_metadata["prompt"] = prompt + if extra_pnginfo is not None: + for x in extra_pnginfo: + metadata.add_text(x, json.dumps(extra_pnginfo[x])) + video_metadata[x] = extra_pnginfo[x] + metadata.add_text("CreationTime", datetime.datetime.now().isoformat(" ")[:19]) + + if batch_manager is not None and unique_id in batch_manager.outputs: + (counter, output_process) = batch_manager.outputs[unique_id] + else: + # comfy counter workaround + max_counter = 0 + + # Loop through the existing files + matcher = re.compile(f"{re.escape(filename)}_(\d+)\D*\..+") + for existing_file in os.listdir(full_output_folder): + # Check if the file matches the expected format + match = matcher.fullmatch(existing_file) + if match: + # Extract the numeric portion of the filename + file_counter = int(match.group(1)) + # Update the maximum counter value if necessary + if file_counter > max_counter: + max_counter = file_counter + + # Increment the counter by 1 to get the next available value + counter = max_counter + 1 + output_process = None + + # save first frame as png to keep metadata + file = f"{filename}_{counter:05}.png" + file_path = os.path.join(full_output_folder, file) + Image.fromarray(tensor_to_bytes(images[0])).save( + file_path, + pnginfo=metadata, + compress_level=4, + ) + output_files.append(file_path) + + format_type, format_ext = format.split("/") + if format_type == "image": + if batch_manager is not None: + raise Exception("Pillow('image/') formats are not compatible with batched output") + image_kwargs = {} + if format_ext == "gif": + image_kwargs['disposal'] = 2 + if format_ext == "webp": + #Save timestamp information + exif = Image.Exif() + exif[ExifTags.IFD.Exif] = {36867: datetime.datetime.now().isoformat(" ")[:19]} + image_kwargs['exif'] = exif + file = f"{filename}_{counter:05}.{format_ext}" + file_path = os.path.join(full_output_folder, file) + images = tensor_to_bytes(images) + if pingpong: + images = np.concatenate((images, images[-2:0:-1])) + frames = [Image.fromarray(f) for f in images] + # Use pillow directly to save an animated image + frames[0].save( + file_path, + format=format_ext.upper(), + save_all=True, + append_images=frames[1:], + duration=round(1000 / frame_rate), + loop=loop_count, + compress_level=4, + **image_kwargs + ) + output_files.append(file_path) + else: + # Use ffmpeg to save a video + if ffmpeg_path is None: + #Should never be reachable + raise ProcessLookupError("Could not find ffmpeg") + + #Acquire additional format_widget values + kwargs = None + if manual_format_widgets is None: + if prompt is not None: + kwargs = prompt[unique_id]['inputs'] + else: + manual_format_widgets = {} + if kwargs is None: + kwargs = get_format_widget_defaults(format_ext) + missing = {} + for k in kwargs.keys(): + if k in manual_format_widgets: + kwargs[k] = manual_format_widgets[k] + else: + missing[k] = kwargs[k] + if len(missing) > 0: + logger.warn("Extra format values were not provided, the following defaults will be used: " + str(kwargs) + "\nThis is likely due to usage of 
ComfyUI-to-python. These values can be manually set by supplying a manual_format_widgets argument") + + video_format = apply_format_widgets(format_ext, kwargs) + if video_format.get('input_color_depth', '8bit') == '16bit': + images = tensor_to_shorts(images) + if images.shape[-1] == 4: + i_pix_fmt = 'rgba64' + else: + i_pix_fmt = 'rgb48' + else: + images = tensor_to_bytes(images) + if images.shape[-1] == 4: + i_pix_fmt = 'rgba' + else: + i_pix_fmt = 'rgb24' + if pingpong: + if batch_manager is not None: + logger.error("pingpong is incompatible with batched output") + images = np.concatenate((images, images[-2:0:-1])) + file = f"{filename}_{counter:05}.{video_format['extension']}" + file_path = os.path.join(full_output_folder, file) + dimensions = f"{len(images[0][0])}x{len(images[0])}" + loop_args = ["-vf", "loop=loop=" + str(loop_count)+":size=" + str(len(images))] + bitrate_arg = [] + bitrate = video_format.get('bitrate') + if bitrate is not None: + bitrate_arg = ["-b:v", str(bitrate) + "M" if video_format.get('megabit') == 'True' else str(bitrate) + "K"] + args = [ffmpeg_path, "-v", "error", "-f", "rawvideo", "-pix_fmt", i_pix_fmt, + "-s", dimensions, "-r", str(frame_rate), "-i", "-"] \ + + loop_args + video_format['main_pass'] + bitrate_arg + + env=os.environ.copy() + if "environment" in video_format: + env.update(video_format["environment"]) + + if output_process is None: + output_process = ffmpeg_process(args, video_format, video_metadata, file_path, env) + #Proceed to first yield + output_process.send(None) + if batch_manager is not None: + batch_manager.outputs[unique_id] = (counter, output_process) + + output_process.send(images.tobytes()) + if batch_manager is not None: + requeue_workflow((batch_manager.unique_id, not batch_manager.has_closed_inputs)) + if batch_manager is None or batch_manager.has_closed_inputs: + #Close pipe and wait for termination. 
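+                #send(None) resumes the generator past its final yield so it can close ffmpeg's stdin and report any errors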
+ try: + output_process.send(None) + except StopIteration: + pass + if batch_manager is not None: + batch_manager.outputs.pop(unique_id) + if len(batch_manager.outputs) == 0: + batch_manager.reset() + else: + #batch is unfinished + #TODO: Check if empty output breaks other custom nodes + return {"ui": {"unfinished_batch": [True]}, "result": ((save_output, []),)} + + output_files.append(file_path) + + if "gifski_pass" in video_format: + gif_output = f"{filename}_{counter:05}.gif" + gif_output_path = os.path.join( full_output_folder, gif_output) + gifski_args = [gifski_path] + video_format["gifski_pass"] \ + + ["-o", gif_output_path, file_path] + try: + res = subprocess.run(gifski_args, env=env, check=True, capture_output=True) + except subprocess.CalledProcessError as e: + raise Exception("An error occured in the gifski subprocess:\n" \ + + e.stderr.decode("utf-8")) + if res.stderr: + print(res.stderr.decode("utf-8"), end="", file=sys.stderr) + #output format is actually an image and should be correctly marked + #TODO: Evaluate a more consistent solution for this + format = "image/gif" + output_files.append(gif_output_path) + file = gif_output + + elif audio is not None and audio() is not False: + # Create audio file if input was provided + output_file_with_audio = f"{filename}_{counter:05}-audio.{video_format['extension']}" + output_file_with_audio_path = os.path.join(full_output_folder, output_file_with_audio) + if "audio_pass" not in video_format: + logger.warn("Selected video format does not have explicit audio support") + video_format["audio_pass"] = ["-c:a", "libopus"] + + + # FFmpeg command with audio re-encoding + #TODO: expose audio quality options if format widgets makes it in + #Reconsider forcing apad/shortest + mux_args = [ffmpeg_path, "-v", "error", "-n", "-i", file_path, + "-i", "-", "-c:v", "copy"] \ + + video_format["audio_pass"] \ + + ["-af", "apad", "-shortest", output_file_with_audio_path] + + try: + res = subprocess.run(mux_args, input=audio(), env=env, + capture_output=True, check=True) + except subprocess.CalledProcessError as e: + raise Exception("An error occured in the ffmpeg subprocess:\n" \ + + e.stderr.decode("utf-8")) + if res.stderr: + print(res.stderr.decode("utf-8"), end="", file=sys.stderr) + output_files.append(output_file_with_audio_path) + #Return this file with audio to the webui. 
+ #It will be muted unless opened or saved with right click + file = output_file_with_audio + + previews = [ + { + "filename": file, + "subfolder": subfolder, + "type": "output" if save_output else "temp", + "format": format, + } + ] + return {"ui": {"gifs": previews}, "result": ((save_output, output_files),)} + @classmethod + def VALIDATE_INPUTS(self, format, **kwargs): + return True + +class LoadAudio: + @classmethod + def INPUT_TYPES(s): + #Hide ffmpeg formats if ffmpeg isn't available + return { + "required": { + "audio_file": ("STRING", {"default": "input/", "vhs_path_extensions": ['wav','mp3','ogg','m4a','flac']}), + }, + "optional" : {"seek_seconds": ("FLOAT", {"default": 0, "min": 0})} + } + + RETURN_TYPES = ("VHS_AUDIO",) + RETURN_NAMES = ("audio",) + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢" + FUNCTION = "load_audio" + def load_audio(self, audio_file, seek_seconds): + if audio_file is None or validate_path(audio_file) != True: + raise Exception("audio_file is not a valid path: " + audio_file) + #Eagerly fetch the audio since the user must be using it if the + #node executes, unlike Load Video + audio = get_audio(audio_file, start_time=seek_seconds) + return (lambda : audio,) + + @classmethod + def IS_CHANGED(s, audio_file, seek_seconds): + return hash_path(audio_file) + + @classmethod + def VALIDATE_INPUTS(s, audio_file, **kwargs): + return validate_path(audio_file, allow_none=True) + +class PruneOutputs: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "filenames": ("VHS_FILENAMES",), + "options": (["Intermediate", "Intermediate and Utility"],) + } + } + + RETURN_TYPES = () + OUTPUT_NODE = True + CATEGORY = "Video Helper Suite 🎥🅥🅗🅢" + FUNCTION = "prune_outputs" + + def prune_outputs(self, filenames, options): + if len(filenames[1]) == 0: + return () + assert(len(filenames[1]) <= 3 and len(filenames[1]) >= 2) + delete_list = [] + if options in ["Intermediate", "Intermediate and Utility", "All"]: + delete_list += filenames[1][1:-1] + if options in ["Intermediate and Utility", "All"]: + delete_list.append(filenames[1][0]) + if options in ["All"]: + delete_list.append(filenames[1][-1]) + + output_dirs = [os.path.abspath("output"), os.path.abspath("temp")] + for file in delete_list: + #Check that path is actually an output directory + if (os.path.commonpath([output_dirs[0], file]) != output_dirs[0]) \ + and (os.path.commonpath([output_dirs[1], file]) != output_dirs[1]): + raise Exception("Tried to prune output from invalid directory: " + file) + if os.path.exists(file): + os.remove(file) + return () + +class BatchManager: + def __init__(self, frames_per_batch=-1): + self.frames_per_batch = frames_per_batch + self.inputs = {} + self.outputs = {} + self.unique_id = None + self.has_closed_inputs = False + def reset(self): + self.close_inputs() + for key in self.outputs: + if getattr(self.outputs[key][-1], "gi_suspended", False): + try: + self.outputs[key][-1].send(None) + except StopIteration: + pass + self.__init__(self.frames_per_batch) + def has_open_inputs(self): + return len(self.inputs) > 0 + def close_inputs(self): + for key in self.inputs: + if getattr(self.inputs[key][-1], "gi_suspended", False): + try: + self.inputs[key][-1].send(1) + except StopIteration: + pass + self.inputs = {} + + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "frames_per_batch": ("INT", {"default": 16, "min": 1, "max": 128, "step": 1}) + }, + "hidden": { + "prompt": "PROMPT", + "unique_id": "UNIQUE_ID" + }, + } + + RETURN_TYPES = ("VHS_BatchManager",) + CATEGORY = "Video 
Helper Suite 🎥🅥🅗🅢" + FUNCTION = "update_batch" + + def update_batch(self, frames_per_batch, prompt=None, unique_id=None): + if unique_id is not None and prompt is not None: + requeue = prompt[unique_id]['inputs'].get('requeue', 0) + else: + requeue = 0 + if requeue == 0: + self.reset() + self.frames_per_batch = frames_per_batch + self.unique_id = unique_id + #onExecuted seems to not be called unless some message is sent + return (self,) + + +NODE_CLASS_MAPPINGS = { + "VHS_VideoCombine": VideoCombine, + "VHS_LoadVideo": LoadVideoUpload, + "VHS_LoadVideoPath": LoadVideoPath, + "VHS_LoadImages": LoadImagesFromDirectoryUpload, + "VHS_LoadImagesPath": LoadImagesFromDirectoryPath, + "VHS_LoadAudio": LoadAudio, + "VHS_PruneOutputs": PruneOutputs, + "VHS_BatchManager": BatchManager, + # Latent and Image nodes + "VHS_SplitLatents": SplitLatents, + "VHS_SplitImages": SplitImages, + "VHS_SplitMasks": SplitMasks, + "VHS_MergeLatents": MergeLatents, + "VHS_MergeImages": MergeImages, + "VHS_MergeMasks": MergeMasks, + "VHS_SelectEveryNthLatent": SelectEveryNthLatent, + "VHS_SelectEveryNthImage": SelectEveryNthImage, + "VHS_SelectEveryNthMask": SelectEveryNthMask, + "VHS_GetLatentCount": GetLatentCount, + "VHS_GetImageCount": GetImageCount, + "VHS_GetMaskCount": GetMaskCount, + "VHS_DuplicateLatents": DuplicateLatents, + "VHS_DuplicateImages": DuplicateImages, + "VHS_DuplicateMasks": DuplicateMasks, + # Batched Nodes + "VHS_VAEEncodeBatched": VAEEncodeBatched, + "VHS_VAEDecodeBatched": VAEDecodeBatched, +} +NODE_DISPLAY_NAME_MAPPINGS = { + "VHS_VideoCombine": "Video Combine 🎥🅥🅗🅢", + "VHS_LoadVideo": "Load Video (Upload) 🎥🅥🅗🅢", + "VHS_LoadVideoPath": "Load Video (Path) 🎥🅥🅗🅢", + "VHS_LoadImages": "Load Images (Upload) 🎥🅥🅗🅢", + "VHS_LoadImagesPath": "Load Images (Path) 🎥🅥🅗🅢", + "VHS_LoadAudio": "Load Audio (Path)🎥🅥🅗🅢", + "VHS_PruneOutputs": "Prune Outputs 🎥🅥🅗🅢", + "VHS_BatchManager": "Batch Manager 🎥🅥🅗🅢", + # Latent and Image nodes + "VHS_SplitLatents": "Split Latent Batch 🎥🅥🅗🅢", + "VHS_SplitImages": "Split Image Batch 🎥🅥🅗🅢", + "VHS_SplitMasks": "Split Mask Batch 🎥🅥🅗🅢", + "VHS_MergeLatents": "Merge Latent Batches 🎥🅥🅗🅢", + "VHS_MergeImages": "Merge Image Batches 🎥🅥🅗🅢", + "VHS_MergeMasks": "Merge Mask Batches 🎥🅥🅗🅢", + "VHS_SelectEveryNthLatent": "Select Every Nth Latent 🎥🅥🅗🅢", + "VHS_SelectEveryNthImage": "Select Every Nth Image 🎥🅥🅗🅢", + "VHS_SelectEveryNthMask": "Select Every Nth Mask 🎥🅥🅗🅢", + "VHS_GetLatentCount": "Get Latent Count 🎥🅥🅗🅢", + "VHS_GetImageCount": "Get Image Count 🎥🅥🅗🅢", + "VHS_GetMaskCount": "Get Mask Count 🎥🅥🅗🅢", + "VHS_DuplicateLatents": "Duplicate Latent Batch 🎥🅥🅗🅢", + "VHS_DuplicateImages": "Duplicate Image Batch 🎥🅥🅗🅢", + "VHS_DuplicateMasks": "Duplicate Mask Batch 🎥🅥🅗🅢", + # Batched Nodes + "VHS_VAEEncodeBatched": "VAE Encode Batched 🎥🅥🅗🅢", + "VHS_VAEDecodeBatched": "VAE Decode Batched 🎥🅥🅗🅢", +} diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/server.py b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/server.py new file mode 100755 index 0000000000000000000000000000000000000000..24ef45d0172ae9e5bc601a2c64941d01fc786e51 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/server.py @@ -0,0 +1,157 @@ +import server +import folder_paths +import os +import time +import subprocess +from .utils import is_url, get_sorted_dir_files_from_directory, ffmpeg_path, validate_sequence +from comfy.k_diffusion.utils import FolderOfImages + +web = server.web + +def is_safe(path): + if "VHS_STRICT_PATHS" not in os.environ: + return True + basedir = 
os.path.abspath('.') + try: + common_path = os.path.commonpath([basedir, path]) + except: + #Different drive on windows + return False + return common_path == basedir + +@server.PromptServer.instance.routes.get("/viewvideo") +async def view_video(request): + query = request.rel_url.query + if "filename" not in query: + return web.Response(status=404) + filename = query["filename"] + + #Path code misformats urls on windows and must be skipped + if is_url(filename): + file = filename + else: + filename, output_dir = folder_paths.annotated_filepath(filename) + + type = request.rel_url.query.get("type", "output") + if type == "path": + #special case for path_based nodes + #NOTE: output_dir may be empty, but non-None + output_dir, filename = os.path.split(filename) + if output_dir is None: + output_dir = folder_paths.get_directory_by_type(type) + + if output_dir is None: + return web.Response(status=400) + + if not is_safe(output_dir): + return web.Response(status=403) + + if "subfolder" in request.rel_url.query: + output_dir = os.path.join(output_dir, request.rel_url.query["subfolder"]) + + filename = os.path.basename(filename) + file = os.path.join(output_dir, filename) + + if query.get('format', 'video') == 'folder': + if not os.path.isdir(file): + return web.Response(status=404) + else: + if not os.path.isfile(file) and not validate_sequence(file): + return web.Response(status=404) + + if query.get('format', 'video') == "folder": + #Check that folder contains some valid image file, get it's extension + #ffmpeg seems to not support list globs, so support for mixed extensions seems unfeasible + os.makedirs(folder_paths.get_temp_directory(), exist_ok=True) + concat_file = os.path.join(folder_paths.get_temp_directory(), "image_sequence_preview.txt") + skip_first_images = int(query.get('skip_first_images', 0)) + select_every_nth = int(query.get('select_every_nth', 1)) + valid_images = get_sorted_dir_files_from_directory(file, skip_first_images, select_every_nth, FolderOfImages.IMG_EXTENSIONS) + if len(valid_images) == 0: + return web.Response(status=400) + with open(concat_file, "w") as f: + f.write("ffconcat version 1.0\n") + for path in valid_images: + f.write("file '" + os.path.abspath(path) + "'\n") + f.write("duration 0.125\n") + in_args = ["-safe", "0", "-i", concat_file] + else: + in_args = ["-an", "-i", file] + + args = [ffmpeg_path, "-v", "error"] + in_args + vfilters = [] + if int(query.get('force_rate',0)) != 0: + vfilters.append("fps=fps="+query['force_rate'] + ":round=up:start_time=0.001") + if int(query.get('skip_first_frames', 0)) > 0: + vfilters.append(f"select=gt(n\\,{int(query['skip_first_frames'])-1})") + if int(query.get('select_every_nth', 1)) > 1: + vfilters.append(f"select=not(mod(n\\,{query['select_every_nth']}))") + if query.get('force_size','Disabled') != "Disabled": + size = query['force_size'].split('x') + if size[0] == '?' or size[1] == '?': + size[0] = "-2" if size[0] == '?' else f"'min({size[0]},iw)'" + size[1] = "-2" if size[1] == '?' else f"'min({size[1]},ih)'" + else: + #Aspect ratio is likely changed. 
A more complex command is required + #to crop the output to the new aspect ratio + ar = float(size[0])/float(size[1]) + vfilters.append(f"crop=if(gt({ar}\\,a)\\,iw\\,ih*{ar}):if(gt({ar}\\,a)\\,iw/{ar}\\,ih)") + size = ':'.join(size) + vfilters.append(f"scale={size}") + vfilters.append("setpts=PTS-STARTPTS") + if len(vfilters) > 0: + args += ["-vf", ",".join(vfilters)] + if int(query.get('frame_load_cap', 0)) > 0: + args += ["-frames:v", query['frame_load_cap']] + #TODO:reconsider adding high frame cap/setting default frame cap on node + + args += ['-c:v', 'libvpx-vp9','-deadline', 'realtime', '-cpu-used', '8', '-f', 'webm', '-'] + + try: + with subprocess.Popen(args, stdout=subprocess.PIPE) as proc: + try: + resp = web.StreamResponse() + resp.content_type = 'video/webm' + resp.headers["Content-Disposition"] = f"filename=\"{filename}\"" + await resp.prepare(request) + while True: + bytes_read = proc.stdout.read() + if bytes_read is None: + #TODO: check for timeout here + time.sleep(.1) + continue + if len(bytes_read) == 0: + break + await resp.write(bytes_read) + except ConnectionResetError as e: + #Kill ffmpeg before stdout closes + proc.kill() + except BrokenPipeError as e: + pass + return resp + +@server.PromptServer.instance.routes.get("/getpath") +async def get_path(request): + query = request.rel_url.query + if "path" not in query: + return web.Response(status=404) + path = os.path.abspath(query["path"]) + + if not os.path.exists(path) or not is_safe(path): + return web.json_response([]) + + #Use get so None is default instead of keyerror + valid_extensions = query.get("extensions") + valid_items = [] + for item in os.scandir(path): + try: + if item.is_dir(): + valid_items.append(item.name + "/") + continue + if valid_extensions is None or item.name.split(".")[-1] in valid_extensions: + valid_items.append(item.name) + except OSError: + #Broken symlinks can throw a very unhelpful "Invalid argument" + pass + + return web.json_response(valid_items) diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/utils.py b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..3a8652044df2016f76c0a4d998d7ef996de500f0 --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/utils.py @@ -0,0 +1,207 @@ +import hashlib +import os +from typing import Iterable +import shutil +import subprocess +import re + +import server +from .logger import logger + +BIGMIN = -(2**53-1) +BIGMAX = (2**53-1) + +DIMMAX = 8192 + +def ffmpeg_suitability(path): + try: + version = subprocess.run([path, "-version"], check=True, + capture_output=True).stdout.decode("utf-8") + except: + return 0 + score = 0 + #rough layout of the importance of various features + simple_criterion = [("libvpx", 20),("264",10), ("265",3), + ("svtav1",5),("libopus", 1)] + for criterion in simple_criterion: + if version.find(criterion[0]) >= 0: + score += criterion[1] + #obtain rough compile year from copyright information + copyright_index = version.find('2000-2') + if copyright_index >= 0: + copyright_year = version[copyright_index+6:copyright_index+9] + if copyright_year.isnumeric(): + score += int(copyright_year) + return score + + +if "VHS_FORCE_FFMPEG_PATH" in os.environ: + ffmpeg_path = os.environ.get("VHS_FORCE_FFMPEG_PATH") +else: + ffmpeg_paths = [] + try: + from imageio_ffmpeg import get_ffmpeg_exe + imageio_ffmpeg_path = get_ffmpeg_exe() + ffmpeg_paths.append(imageio_ffmpeg_path) + except: + if "VHS_USE_IMAGEIO_FFMPEG" in 
os.environ: + raise + logger.warn("Failed to import imageio_ffmpeg") + if "VHS_USE_IMAGEIO_FFMPEG" in os.environ: + ffmpeg_path = imageio_ffmpeg_path + else: + system_ffmpeg = shutil.which("ffmpeg") + if system_ffmpeg is not None: + ffmpeg_paths.append(system_ffmpeg) + if len(ffmpeg_paths) == 0: + logger.error("No valid ffmpeg found.") + ffmpeg_path = None + elif len(ffmpeg_paths) == 1: + #Evaluation of suitability isn't required, can take sole option + #to reduce startup time + ffmpeg_path = ffmpeg_paths[0] + else: + ffmpeg_path = max(ffmpeg_paths, key=ffmpeg_suitability) +gifski_path = os.environ.get("VHS_GIFSKI", None) +if gifski_path is None: + gifski_path = os.environ.get("JOV_GIFSKI", None) + if gifski_path is None: + gifski_path = shutil.which("gifski") + +def get_sorted_dir_files_from_directory(directory: str, skip_first_images: int=0, select_every_nth: int=1, extensions: Iterable=None): + directory = directory.strip() + dir_files = os.listdir(directory) + dir_files = sorted(dir_files) + dir_files = [os.path.join(directory, x) for x in dir_files] + dir_files = list(filter(lambda filepath: os.path.isfile(filepath), dir_files)) + # filter by extension, if needed + if extensions is not None: + extensions = list(extensions) + new_dir_files = [] + for filepath in dir_files: + ext = "." + filepath.split(".")[-1] + if ext.lower() in extensions: + new_dir_files.append(filepath) + dir_files = new_dir_files + # start at skip_first_images + dir_files = dir_files[skip_first_images:] + dir_files = dir_files[0::select_every_nth] + return dir_files + + +# modified from https://stackoverflow.com/questions/22058048/hashing-a-file-in-python +def calculate_file_hash(filename: str, hash_every_n: int = 1): + #Larger video files were taking >.5 seconds to hash even when cached, + #so instead the modified time from the filesystem is used as a hash + h = hashlib.sha256() + h.update(filename.encode()) + h.update(str(os.path.getmtime(filename)).encode()) + return h.hexdigest() + +prompt_queue = server.PromptServer.instance.prompt_queue +def requeue_workflow_unchecked(): + """Requeues the current workflow without checking for multiple requeues""" + currently_running = prompt_queue.currently_running + (_, _, prompt, extra_data, outputs_to_execute) = next(iter(currently_running.values())) + + #Ensure batch_managers are marked stale + prompt = prompt.copy() + for uid in prompt: + if prompt[uid]['class_type'] == 'VHS_BatchManager': + prompt[uid]['inputs']['requeue'] = prompt[uid]['inputs'].get('requeue',0)+1 + + #execution.py has guards for concurrency, but server doesn't. 
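+ #A negated copy of the server's running prompt counter is used for the new + #prompt's number below: the queue executes lower numbers first, so a negative + #number effectively requeues this workflow at the front of the queue.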
+ #TODO: Check that this won't be an issue + number = -server.PromptServer.instance.number + server.PromptServer.instance.number += 1 + prompt_id = str(server.uuid.uuid4()) + prompt_queue.put((number, prompt_id, prompt, extra_data, outputs_to_execute)) + +requeue_guard = [None, 0, 0, {}] +def requeue_workflow(requeue_required=(-1,True)): + assert(len(prompt_queue.currently_running) == 1) + global requeue_guard + (run_number, _, prompt, _, _) = next(iter(prompt_queue.currently_running.values())) + if requeue_guard[0] != run_number: + #Calculate a count of how many outputs are managed by a batch manager + managed_outputs=0 + for bm_uid in prompt: + if prompt[bm_uid]['class_type'] == 'VHS_BatchManager': + for output_uid in prompt: + if prompt[output_uid]['class_type'] in ["VHS_VideoCombine"]: + for inp in prompt[output_uid]['inputs'].values(): + if inp == [bm_uid, 0]: + managed_outputs+=1 + requeue_guard = [run_number, 0, managed_outputs, {}] + requeue_guard[1] = requeue_guard[1]+1 + requeue_guard[3][requeue_required[0]] = requeue_required[1] + if requeue_guard[1] == requeue_guard[2] and max(requeue_guard[3].values()): + requeue_workflow_unchecked() + +def get_audio(file, start_time=0, duration=0): + args = [ffmpeg_path, "-v", "error", "-i", file] + if start_time > 0: + args += ["-ss", str(start_time)] + if duration > 0: + args += ["-t", str(duration)] + try: + res = subprocess.run(args + ["-f", "wav", "-"], + stdout=subprocess.PIPE, check=True).stdout + except subprocess.CalledProcessError as e: + logger.warning(f"Failed to extract audio from: {file}") + return False + return res + + +def lazy_eval(func): + class Cache: + def __init__(self, func): + self.res = None + self.func = func + def get(self): + if self.res is None: + self.res = self.func() + return self.res + cache = Cache(func) + return lambda : cache.get() + + +def is_url(url): + return url.split("://")[0] in ["http", "https"] + +def validate_sequence(path): + #Check if path is a valid ffmpeg sequence that points to at least one file + (path, file) = os.path.split(path) + if not os.path.isdir(path): + return False + match = re.search('%0?\d+d', file) + if not match: + return False + seq = match.group() + if seq == '%d': + seq = '\\\\d+' + else: + seq = '\\\\d{%s}' % seq[1:-1] + file_matcher = re.compile(re.sub('%0?\d+d', seq, file)) + for file in os.listdir(path): + if file_matcher.fullmatch(file): + return True + return False + +def hash_path(path): + if path is None: + return "input" + if is_url(path): + return "url" + return calculate_file_hash(path.strip("\"")) + + +def validate_path(path, allow_none=False, allow_url=True): + if path is None: + return allow_none + if is_url(path): + #Probably not feasible to check if url resolves here + return True if allow_url else "URLs are unsupported for this path" + if not os.path.isfile(path.strip("\"")): + return "Invalid file path: {}".format(path) + return True diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/web/js/VHS.core.js b/custom_nodes/ComfyUI-VideoHelperSuite/web/js/VHS.core.js new file mode 100755 index 0000000000000000000000000000000000000000..522398600c07c38675b7f8860dd1a23346f8cb3b --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/web/js/VHS.core.js @@ -0,0 +1,1090 @@ +import { app } from '../../../scripts/app.js' +import { api } from '../../../scripts/api.js' +import { applyTextReplacements } from "../../../scripts/utils.js"; + +function chainCallback(object, property, callback) { + if (object == undefined) { + //This should not happen. 
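+ //(defensive guard: chaining onto a missing object would otherwise throw much + //later, far from the actual cause, so log the mistake and bail out here instead)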
+ console.error("Tried to add callback to non-existant object") + return; + } + if (property in object) { + const callback_orig = object[property] + object[property] = function () { + const r = callback_orig.apply(this, arguments); + callback.apply(this, arguments); + return r + }; + } else { + object[property] = callback; + } +} + +function injectHidden(widget) { + widget.computeSize = (target_width) => { + if (widget.hidden) { + return [0, -4]; + } + return [target_width, 20]; + }; + widget._type = widget.type + Object.defineProperty(widget, "type", { + set : function(value) { + widget._type = value; + }, + get : function() { + if (widget.hidden) { + return "hidden"; + } + return widget._type; + } + }); +} + +const convDict = { + VHS_LoadImages : ["directory", null, "image_load_cap", "skip_first_images", "select_every_nth"], + VHS_LoadImagesPath : ["directory", "image_load_cap", "skip_first_images", "select_every_nth"], + VHS_VideoCombine : ["frame_rate", "loop_count", "filename_prefix", "format", "pingpong", "save_image"], + VHS_LoadVideo : ["video", "force_rate", "force_size", "frame_load_cap", "skip_first_frames", "select_every_nth"], + VHS_LoadVideoPath : ["video", "force_rate", "force_size", "frame_load_cap", "skip_first_frames", "select_every_nth"] +}; +const renameDict = {VHS_VideoCombine : {save_output : "save_image"}} +function useKVState(nodeType) { + chainCallback(nodeType.prototype, "onNodeCreated", function () { + chainCallback(this, "onConfigure", function(info) { + if (!this.widgets) { + //Node has no widgets, there is nothing to restore + return + } + if (typeof(info.widgets_values) != "object") { + //widgets_values is in some unknown inactionable format + return + } + let widgetDict = info.widgets_values + if (info.widgets_values.length) { + //widgets_values is in the old list format + if (this.type in convDict) { + //widget does not have a conversion format provided + let convList = convDict[this.type]; + if(info.widgets_values.length >= convList.length) { + //has all required fields + widgetDict = {} + for (let i = 0; i < convList.length; i++) { + if(!convList[i]) { + //Element should not be processed (upload button on load image sequence) + continue + } + widgetDict[convList[i]] = info.widgets_values[i]; + } + } else { + //widgets_values is missing elements marked as required + //let it fall through to failure state + } + } + } + if (widgetDict.length == undefined) { + for (let w of this.widgets) { + if (w.name in widgetDict) { + w.value = widgetDict[w.name]; + } else { + //Check for a legacy name that needs migrating + if (this.type in renameDict && w.name in renameDict[this.type]) { + if (renameDict[this.type][w.name] in widgetDict) { + w.value = widgetDict[renameDict[this.type][w.name]] + continue + } + } + //attempt to restore default value + let inputs = LiteGraph.getNodeType(this.type).nodeData.input; + let initialValue = null; + if (inputs?.required?.hasOwnProperty(w.name)) { + if (inputs.required[w.name][1]?.hasOwnProperty("default")) { + initialValue = inputs.required[w.name][1].default; + } else if (inputs.required[w.name][0].length) { + initialValue = inputs.required[w.name][0][0]; + } + } else if (inputs?.optional?.hasOwnProperty(w.name)) { + if (inputs.optional[w.name][1]?.hasOwnProperty("default")) { + initialValue = inputs.optional[w.name][1].default; + } else if (inputs.optional[w.name][0].length) { + initialValue = inputs.optional[w.name][0][0]; + } + } + if (initialValue) { + w.value = initialValue; + } + } + } + } else { + //Saved data was not a map 
made by this method + //and a conversion dict for it does not exist + //It's likely an array and that has been blindly applied + if (info?.widgets_values?.length != this.widgets.length) { + //Widget could not have restored properly + //Note if multiple node loads fail, only the latest error dialog displays + app.ui.dialog.show("Failed to restore node: " + this.title + "\nPlease remove and re-add it.") + this.bgcolor = "#C00" + } + } + }); + chainCallback(this, "onSerialize", function(info) { + info.widgets_values = {}; + if (!this.widgets) { + //object has no widgets, there is nothing to store + return; + } + for (let w of this.widgets) { + info.widgets_values[w.name] = w.value; + } + }); + }) +} + +function fitHeight(node) { + node.setSize([node.size[0], node.computeSize([node.size[0], node.size[1]])[1]]) + node?.graph?.setDirtyCanvas(true); +} + +async function uploadFile(file) { + //TODO: Add uploaded file to cache with Cache.put()? + try { + // Wrap file in formdata so it includes filename + const body = new FormData(); + const i = file.webkitRelativePath.lastIndexOf('/'); + const subfolder = file.webkitRelativePath.slice(0,i+1) + const new_file = new File([file], file.name, { + type: file.type, + lastModified: file.lastModified, + }); + body.append("image", new_file); + if (i > 0) { + body.append("subfolder", subfolder); + } + const resp = await api.fetchApi("/upload/image", { + method: "POST", + body, + }); + + if (resp.status === 200) { + return resp.status + } else { + alert(resp.status + " - " + resp.statusText); + } + } catch (error) { + alert(error); + } +} + +function addDateFormatting(nodeType, field, timestamp_widget = false) { + chainCallback(nodeType.prototype, "onNodeCreated", function() { + const widget = this.widgets.find((w) => w.name === field); + widget.serializeValue = () => { + return applyTextReplacements(app, widget.value); + }; + }); +} +function addTimestampWidget(nodeType, nodeData, targetWidget) { + const newWidgets = {}; + for (let key in nodeData.input.required) { + if (key == targetWidget) { + //TODO: account for duplicate entries? 
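+ //(the required-input map is rebuilt key by key, rather than assigned directly, + //so the new toggle lands immediately before the target widget and renders + //adjacent to it on the node)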
+ newWidgets["timestamp_directory"] = ["BOOLEAN", {"default": true}] + } + newWidgets[key] = nodeData.input.required[key]; + } + nodeDta.input.required = newWidgets; + chainCallback(nodeType.prototype, "onNodeCreated", function () { + const directoryWidget = this.widgets.find((w) => w.name === "directory_name"); + const timestampWidget = this.widgets.find((w) => w.name === "timestamp_directory"); + directoryWidget.serializeValue = () => { + if (timestampWidget.value) { + //ignore actual value and return timestamp + return formatDate("yyyy-MM-ddThh:mm:ss", new Date()); + } + return directoryWidget.value + }; + timestampWidget._value = value; + Object.definteProperty(timestampWidget, "value", { + set : function(value) { + this._value = value; + directoryWidget.disabled = value; + }, + get : function() { + return this._value; + } + }); + }); +} + +function addCustomSize(nodeType, nodeData, widgetName) { + //Add a callback which sets up the actual logic once the node is created + chainCallback(nodeType.prototype, "onNodeCreated", function() { + const node = this; + const sizeOptionWidget = node.widgets.find((w) => w.name === widgetName); + const widthWidget = node.widgets.find((w) => w.name === "custom_width"); + const heightWidget = node.widgets.find((w) => w.name === "custom_height"); + injectHidden(widthWidget); + injectHidden(heightWidget); + sizeOptionWidget._value = sizeOptionWidget.value; + Object.defineProperty(sizeOptionWidget, "value", { + set : function(value) { + //TODO: Only modify hidden/reset size when a change occurs + if (value == "Custom Width") { + widthWidget.hidden = false; + heightWidget.hidden = true; + } else if (value == "Custom Height") { + widthWidget.hidden = true; + heightWidget.hidden = false; + } else if (value == "Custom") { + widthWidget.hidden = false; + heightWidget.hidden = false; + } else{ + widthWidget.hidden = true; + heightWidget.hidden = true; + } + node.setSize([node.size[0], node.computeSize([node.size[0], node.size[1]])[1]]) + this._value = value; + }, + get : function() { + return this._value; + } + }); + //Ensure proper visibility/size state for initial value + sizeOptionWidget.value = sizeOptionWidget._value; + + sizeOptionWidget.serializePreview = function() { + if (this.value == "Custom Width") { + return widthWidget.value + "x?"; + } else if (this.value == "Custom Height") { + return "?x" + heightWidget.value; + } else if (this.value == "Custom") { + return widthWidget.value + "x" + heightWidget.value; + } else { + return this.value; + } + }; + }); +} +function addUploadWidget(nodeType, nodeData, widgetName, type="video") { + chainCallback(nodeType.prototype, "onNodeCreated", function() { + const pathWidget = this.widgets.find((w) => w.name === widgetName); + const fileInput = document.createElement("input"); + chainCallback(this, "onRemoved", () => { + fileInput?.remove(); + }); + if (type == "folder") { + Object.assign(fileInput, { + type: "file", + style: "display: none", + webkitdirectory: true, + onchange: async () => { + const directory = fileInput.files[0].webkitRelativePath; + const i = directory.lastIndexOf('/'); + if (i <= 0) { + throw "No directory found"; + } + const path = directory.slice(0,directory.lastIndexOf('/')) + if (pathWidget.options.values.includes(path)) { + alert("A folder of the same name already exists"); + return; + } + let successes = 0; + for(const file of fileInput.files) { + if (await uploadFile(file) == 200) { + successes++; + } else { + //Upload failed, but some prior uploads may have succeeded + //Stop future 
uploads to prevent cascading failures + //and only add to list if an upload has succeeded + if (successes > 0) { + break + } else { + return; + } + } + } + pathWidget.options.values.push(path); + pathWidget.value = path; + if (pathWidget.callback) { + pathWidget.callback(path) + } + }, + }); + } else if (type == "video") { + Object.assign(fileInput, { + type: "file", + accept: "video/webm,video/mp4,video/mkv,image/gif", + style: "display: none", + onchange: async () => { + if (fileInput.files.length) { + if (await uploadFile(fileInput.files[0]) != 200) { + //upload failed and file can not be added to options + return; + } + const filename = fileInput.files[0].name; + pathWidget.options.values.push(filename); + pathWidget.value = filename; + if (pathWidget.callback) { + pathWidget.callback(filename) + } + } + }, + }); + } else { + throw "Unknown upload type" + } + document.body.append(fileInput); + let uploadWidget = this.addWidget("button", "choose " + type + " to upload", "image", () => { + //clear the active click event + app.canvas.node_widget = null + + fileInput.click(); + }); + uploadWidget.options.serialize = false; + }); +} + +function addVideoPreview(nodeType) { + chainCallback(nodeType.prototype, "onNodeCreated", function() { + var element = document.createElement("div"); + const previewNode = this; + var previewWidget = this.addDOMWidget("videopreview", "preview", element, { + serialize: false, + hideOnZoom: false, + getValue() { + return element.value; + }, + setValue(v) { + element.value = v; + }, + }); + previewWidget.computeSize = function(width) { + if (this.aspectRatio && !this.parentEl.hidden) { + let height = (previewNode.size[0]-20)/ this.aspectRatio + 10; + if (!(height > 0)) { + height = 0; + } + this.computedHeight = height + 10; + return [width, height]; + } + return [width, -4];//no loaded src, widget should not display + } + element.style['pointer-events'] = "none" + previewWidget.value = {hidden: false, paused: false, params: {}} + previewWidget.parentEl = document.createElement("div"); + previewWidget.parentEl.className = "vhs_preview"; + previewWidget.parentEl.style['width'] = "100%" + element.appendChild(previewWidget.parentEl); + previewWidget.videoEl = document.createElement("video"); + previewWidget.videoEl.controls = false; + previewWidget.videoEl.loop = true; + previewWidget.videoEl.muted = true; + previewWidget.videoEl.style['width'] = "100%" + previewWidget.videoEl.addEventListener("loadedmetadata", () => { + + previewWidget.aspectRatio = previewWidget.videoEl.videoWidth / previewWidget.videoEl.videoHeight; + fitHeight(this); + }); + previewWidget.videoEl.addEventListener("error", () => { + //TODO: consider a way to properly notify the user why a preview isn't shown. 
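+ //(for now a failed load simply collapses the preview: the container is hidden + //and the node is resized as though no preview had loaded)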
+ previewWidget.parentEl.hidden = true; + fitHeight(this); + }); + + previewWidget.imgEl = document.createElement("img"); + previewWidget.imgEl.style['width'] = "100%" + previewWidget.imgEl.hidden = true; + previewWidget.imgEl.onload = () => { + previewWidget.aspectRatio = previewWidget.imgEl.naturalWidth / previewWidget.imgEl.naturalHeight; + fitHeight(this); + }; + + var timeout = null; + this.updateParameters = (params, force_update) => { + if (!previewWidget.value.params) { + if (typeof(previewWidget.value) != 'object') { + previewWidget.value = {hidden: false, paused: false} + } + previewWidget.value.params = {} + } + Object.assign(previewWidget.value.params, params) + if (!force_update && + !app.ui.settings.getSettingValue("VHS.AdvancedPreviews", false)) { + return; + } + if (timeout) { + clearTimeout(timeout); + } + if (force_update) { + previewWidget.updateSource(); + } else { + timeout = setTimeout(() => previewWidget.updateSource(),100); + } + }; + previewWidget.updateSource = function () { + if (this.value.params == undefined) { + return; + } + let params = {} + Object.assign(params, this.value.params);//shallow copy + this.parentEl.hidden = this.value.hidden; + if (params.format?.split('/')[0] == 'video' || + app.ui.settings.getSettingValue("VHS.AdvancedPreviews", false) && + (params.format?.split('/')[1] == 'gif') || params.format == 'folder') { + this.videoEl.autoplay = !this.value.paused && !this.value.hidden; + let target_width = 256 + if (element.style?.width) { + //overscale to allow scrolling. Endpoint won't return higher than native + target_width = element.style.width.slice(0,-2)*2; + } + if (!params.force_size || params.force_size.includes("?") || params.force_size == "Disabled") { + params.force_size = target_width+"x?" + } else { + let size = params.force_size.split("x") + let ar = parseInt(size[0])/parseInt(size[1]) + params.force_size = target_width+"x"+(target_width/ar) + } + if (app.ui.settings.getSettingValue("VHS.AdvancedPreviews", false)) { + this.videoEl.src = api.apiURL('/viewvideo?' + new URLSearchParams(params)); + } else { + previewWidget.videoEl.src = api.apiURL('/view?' + new URLSearchParams(params)); + } + this.videoEl.hidden = false; + this.imgEl.hidden = true; + } else if (params.format?.split('/')[0] == 'image') { + //Is animated image + this.imgEl.src = api.apiURL('/view?' + new URLSearchParams(params)); + this.videoEl.hidden = true; + this.imgEl.hidden = false; + } + } + previewWidget.parentEl.appendChild(previewWidget.videoEl) + previewWidget.parentEl.appendChild(previewWidget.imgEl) + }); +} +function addPreviewOptions(nodeType) { + chainCallback(nodeType.prototype, "getExtraMenuOptions", function(_, options) { + // The intended way of appending options is returning a list of extra options, + // but this isn't used in widgetInputs.js and would require + // less generalization of chainCallback + let optNew = [] + const previewWidget = this.widgets.find((w) => w.name === "videopreview"); + + let url = null + if (previewWidget.videoEl?.hidden == false && previewWidget.videoEl.src) { + //Use full quality video + url = api.apiURL('/view?'
+ new URLSearchParams(previewWidget.value.params)); + } else if (previewWidget.imgEl?.hidden == false && previewWidget.imgEl.src) { + url = previewWidget.imgEl.src; + url = new URL(url); + } + if (url) { + optNew.push( + { + content: "Open preview", + callback: () => { + window.open(url, "_blank") + }, + }, + { + content: "Save preview", + callback: () => { + const a = document.createElement("a"); + a.href = url; + a.setAttribute("download", new URLSearchParams(previewWidget.value.params).get("filename")); + document.body.append(a); + a.click(); + requestAnimationFrame(() => a.remove()); + }, + } + ); + } + const PauseDesc = (previewWidget.value.paused ? "Resume" : "Pause") + " preview"; + if(previewWidget.videoEl.hidden == false) { + optNew.push({content: PauseDesc, callback: () => { + //animated images can't be paused and are more likely to cause performance issues. + //changing src to a single keyframe is possible, + //For now, the option is disabled if an animated image is being displayed + if(previewWidget.value.paused) { + previewWidget.videoEl?.play(); + } else { + previewWidget.videoEl?.pause(); + } + previewWidget.value.paused = !previewWidget.value.paused; + }}); + } + //TODO: Consider hiding elements if no video preview is available yet. + //It would reduce confusion at the cost of functionality + //(if a video preview lags the computer, the user should be able to hide in advance) + const visDesc = (previewWidget.value.hidden ? "Show" : "Hide") + " preview"; + optNew.push({content: visDesc, callback: () => { + if (!previewWidget.videoEl.hidden && !previewWidget.value.hidden) { + previewWidget.videoEl.pause(); + } else if (previewWidget.value.hidden && !previewWidget.videoEl.hidden && !previewWidget.value.paused) { + previewWidget.videoEl.play(); + } + previewWidget.value.hidden = !previewWidget.value.hidden; + previewWidget.parentEl.hidden = previewWidget.value.hidden; + fitHeight(this); + + }}); + optNew.push({content: "Sync preview", callback: () => { + //TODO: address case where videos have varying length + //Consider a system of sync groups which are opt-in? 
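+ //(current rough approach: rewind every preview <video> on the page to zero and + //re-assign each <img> src, which restarts animated images, so previews of equal + //length play in step)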
+ for (let p of document.getElementsByClassName("vhs_preview")) { + for (let child of p.children) { + if (child.tagName == "VIDEO") { + child.currentTime=0; + } else if (child.tagName == "IMG") { + child.src = child.src; + } + } + } + }}); + if(options.length > 0 && options[0] != null && optNew.length > 0) { + optNew.push(null); + } + options.unshift(...optNew); + }); +} +function addFormatWidgets(nodeType) { + function parseFormats(options) { + options.fullvalues = options._values; + options._values = []; + for (let format of options.fullvalues) { + if (Array.isArray(format)) { + options._values.push(format[0]); + } else { + options._values.push(format); + } + } + } + chainCallback(nodeType.prototype, "onNodeCreated", function() { + var formatWidget = null; + var formatWidgetIndex = -1; + for(let i = 0; i < this.widgets.length; i++) { + if (this.widgets[i].name === "format"){ + formatWidget = this.widgets[i]; + formatWidgetIndex = i+1; + } + } + let formatWidgetsCount = 0; + //Pre-process options to just names + formatWidget.options._values = formatWidget.options.values; + parseFormats(formatWidget.options); + Object.defineProperty(formatWidget.options, "values", { + set : (value) => { + formatWidget.options._values = value; + parseFormats(formatWidget.options); + }, + get : () => { + return formatWidget.options._values; + } + }) + + formatWidget._value = formatWidget.value; + Object.defineProperty(formatWidget, "value", { + set : (value) => { + formatWidget._value = value; + let newWidgets = []; + const fullDef = formatWidget.options.fullvalues.find((w) => Array.isArray(w) ? w[0] === value : w === value); + if (!Array.isArray(fullDef)) { + formatWidget._value = value; + } else { + formatWidget._value = fullDef[0]; + for (let wDef of fullDef[1]) { + //create widgets. Heavy borrowed from web/scripts/app.js + //default implementation doesn't work since it automatically adds + //the widget in the wrong spot. + //TODO: consider letting this happen and just removing from list? + let w = {}; + w.name = wDef[0]; + let inputData = wDef.slice(1); + w.type = inputData[0]; + w.options = inputData[1] ? 
inputData[1] : {}; + if (Array.isArray(w.type)) { + w.value = w.type[0]; + w.options.values = w.type; + w.type = "combo"; + } + if(inputData[1]?.default) { + w.value = inputData[1].default; + } + if (w.type == "INT") { + Object.assign(w.options, {"precision": 0, "step": 10}) + w.callback = function (v) { + const s = this.options.step / 10; + this.value = Math.round(v / s) * s; + } + } + const typeTable = {BOOLEAN: "toggle", STRING: "text", INT: "number", FLOAT: "number"}; + if (w.type in typeTable) { + w.type = typeTable[w.type]; + } + newWidgets.push(w); + } + } + this.widgets.splice(formatWidgetIndex, formatWidgetsCount, ...newWidgets); + fitHeight(this); + formatWidgetsCount = newWidgets.length; + }, + get : () => { + return formatWidget._value; + } + }); + }); +} +function addLoadVideoCommon(nodeType, nodeData) { + addCustomSize(nodeType, nodeData, "force_size") + addVideoPreview(nodeType); + addPreviewOptions(nodeType); + chainCallback(nodeType.prototype, "onNodeCreated", function() { + const pathWidget = this.widgets.find((w) => w.name === "video"); + const frameCapWidget = this.widgets.find((w) => w.name === 'frame_load_cap'); + const frameSkipWidget = this.widgets.find((w) => w.name === 'skip_first_frames'); + const rateWidget = this.widgets.find((w) => w.name === 'force_rate'); + const skipWidget = this.widgets.find((w) => w.name === 'select_every_nth'); + const sizeWidget = this.widgets.find((w) => w.name === 'force_size'); + //widget.callback adds unused arguements which need culling + let update = function (value, _, node) { + let param = {} + param[this.name] = value + node.updateParameters(param); + } + chainCallback(frameCapWidget, "callback", update); + chainCallback(frameSkipWidget, "callback", update); + chainCallback(rateWidget, "callback", update); + chainCallback(skipWidget, "callback", update); + let priorSize = sizeWidget.value; + let updateSize = function(value, _, node) { + if (sizeWidget.value == 'Custom' || priorSize != sizeWidget.value) { + node.updateParameters({"force_size": sizeWidget.serializePreview()}); + } + priorSize = sizeWidget.value; + } + chainCallback(sizeWidget, "callback", updateSize); + chainCallback(this.widgets.find((w) => w.name === "custom_width"), "callback", updateSize); + chainCallback(this.widgets.find((w) => w.name === "custom_height"), "callback", updateSize); + + //do first load + requestAnimationFrame(() => { + for (let w of [frameCapWidget, frameSkipWidget, rateWidget, pathWidget, skipWidget]) { + w.callback(w.value, null, this); + } + }); + }); +} +function addLoadImagesCommon(nodeType, nodeData) { + addVideoPreview(nodeType); + addPreviewOptions(nodeType); + chainCallback(nodeType.prototype, "onNodeCreated", function() { + const pathWidget = this.widgets.find((w) => w.name === "directory"); + const frameCapWidget = this.widgets.find((w) => w.name === 'image_load_cap'); + const frameSkipWidget = this.widgets.find((w) => w.name === 'skip_first_images'); + const skipWidget = this.widgets.find((w) => w.name === 'select_every_nth'); + //widget.callback adds unused arguements which need culling + let update = function (value, _, node) { + let param = {} + param[this.name] = value + node.updateParameters(param); + } + chainCallback(frameCapWidget, "callback", (value, _, node) => { + node.updateParameters({frame_load_cap: value}) + }); + chainCallback(frameSkipWidget, "callback", update); + chainCallback(skipWidget, "callback", update); + //do first load + requestAnimationFrame(() => { + for (let w of [frameCapWidget, frameSkipWidget, 
pathWidget, skipWidget]) { + w.callback(w.value, null, this); + } + }); + }); +} + +function path_stem(path) { + let i = path.lastIndexOf("/"); + if (i >= 0) { + return [path.slice(0,i+1),path.slice(i+1)]; + } + return ["",path]; +} +function searchBox(event, [x,y], node) { + //Ensure only one dialog shows at a time + if (this.prompt) + return; + this.prompt = true; + + let pathWidget = this; + let dialog = document.createElement("div"); + dialog.className = "litegraph litesearchbox graphdialog rounded" + dialog.innerHTML = '<span class="name">Path</span> <input autofocus="" type="text" class="value"/><button>OK</button><div class="helper"></div>
' + dialog.close = () => { + dialog.remove(); + } + document.body.append(dialog); + if (app.canvas.ds.scale > 1) { + dialog.style.transform = "scale(" + app.canvas.ds.scale + ")"; + } + var name_element = dialog.querySelector(".name"); + var input = dialog.querySelector(".value"); + var options_element = dialog.querySelector(".helper"); + input.value = pathWidget.value; + + var timeout = null; + let last_path = null; + let extensions = pathWidget.options.extensions + + input.addEventListener("keydown", (e) => { + dialog.is_modified = true; + if (e.keyCode == 27) { + //ESC + dialog.close(); + } else if (e.keyCode == 13 && e.target.localName != "textarea") { + pathWidget.value = input.value; + if (pathWidget.callback) { + pathWidget.callback(pathWidget.value); + } + dialog.close(); + } else { + if (e.keyCode == 9) { + //TAB + input.value = last_path + options_element.firstChild.innerText; + e.preventDefault(); + e.stopPropagation(); + } else if (e.ctrlKey && e.keyCode == 87) { + //Ctrl+w + //most browsers won't support, but it's good QOL for those that do + input.value = path_stem(input.value.slice(0,-1))[0] + e.preventDefault(); + e.stopPropagation(); + } else if (e.ctrlKey && e.keyCode == 71) { + //Ctrl+g + //Temporarily disables extension filtering to show all files + e.preventDefault(); + e.stopPropagation(); + extensions = undefined + last_path = null; + } + if (timeout) { + clearTimeout(timeout); + } + timeout = setTimeout(updateOptions, 10); + return; + } + this.prompt=false; + e.preventDefault(); + e.stopPropagation(); + }); + + var button = dialog.querySelector("button"); + button.addEventListener("click", (e) => { + pathWidget.value = input.value; + if (pathWidget.callback) { + pathWidget.callback(pathWidget.value); + } + //unsure why dirty is set here, but not on enter-key above + node.graph.setDirtyCanvas(true); + dialog.close(); + this.prompt = false; + }); + var rect = app.canvas.canvas.getBoundingClientRect(); + var offsetx = -20; + var offsety = -20; + if (rect) { + offsetx -= rect.left; + offsety -= rect.top; + } + + if (event) { + dialog.style.left = event.clientX + offsetx + "px"; + dialog.style.top = event.clientY + offsety + "px"; + } else { + dialog.style.left = canvas.width * 0.5 + offsetx + "px"; + dialog.style.top = canvas.height * 0.5 + offsety + "px"; + } + //Search code + let options = [] + function addResult(name, isDir) { + let el = document.createElement("div"); + el.innerText = name; + el.className = "litegraph lite-search-item"; + if (isDir) { + el.className += " is-dir"; + el.addEventListener("click", (e) => { + input.value = last_path+name + if (timeout) { + clearTimeout(timeout); + } + timeout = setTimeout(updateOptions, 10); + }); + } else { + el.addEventListener("click", (e) => { + pathWidget.value = last_path+name; + if (pathWidget.callback) { + pathWidget.callback(pathWidget.value); + } + dialog.close(); + pathWidget.prompt = false; + }); + } + options_element.appendChild(el); + } + async function updateOptions() { + timeout = null; + let [path, remainder] = path_stem(input.value); + if (last_path != path) { + //fetch options. Must block execution here, so update should be async? + let params = {path : path} + if (extensions) { + params.extensions = extensions + } + let optionsURL = api.apiURL('getpath?' 
+ new URLSearchParams(params)); + try { + let resp = await fetch(optionsURL); + options = await resp.json(); + } catch(e) { + options = [] + } + last_path = path; + } + options_element.innerHTML = ''; + //filter options based on remainder + for (let option of options) { + if (option.startsWith(remainder)) { + let isDir = option.endsWith('/') + addResult(option, isDir); + } + } + } + + setTimeout(async function() { + input.focus(); + await updateOptions(); + }, 10); + + return dialog; +} + +app.ui.settings.addSetting({ + id: "VHS.AdvancedPreviews", + name: "🎥🅥🅗🅢 Advanced Previews", + type: "boolean", + defaultValue: false, +}); + +app.registerExtension({ + name: "VideoHelperSuite.Core", + async beforeRegisterNodeDef(nodeType, nodeData, app) { + if(nodeData?.name?.startsWith("VHS_")) { + useKVState(nodeType); + chainCallback(nodeType.prototype, "onNodeCreated", function () { + let new_widgets = [] + if (this.widgets) { + for (let w of this.widgets) { + let input = this.constructor.nodeData.input + let config = input?.required[w.name] ?? input.optional[w.name] + if (!config) { + continue + } + if (w?.type == "text" && config[1].vhs_path_extensions) { + new_widgets.push(app.widgets.VHSPATH({}, w.name, ["VHSPATH", config[1]])); + } else { + new_widgets.push(w) + } + } + this.widgets = new_widgets; + } + }); + } + if (nodeData?.name == "VHS_LoadImages") { + addUploadWidget(nodeType, nodeData, "directory", "folder"); + chainCallback(nodeType.prototype, "onNodeCreated", function() { + const pathWidget = this.widgets.find((w) => w.name === "directory"); + chainCallback(pathWidget, "callback", (value) => { + if (!value) { + return; + } + let params = {filename : value, type : "input", format: "folder"}; + this.updateParameters(params, true); + }); + }); + addLoadImagesCommon(nodeType, nodeData); + } else if (nodeData?.name == "VHS_LoadImagesPath") { + addUploadWidget(nodeType, nodeData, "directory", "folder"); + chainCallback(nodeType.prototype, "onNodeCreated", function() { + const pathWidget = this.widgets.find((w) => w.name === "directory"); + chainCallback(pathWidget, "callback", (value) => { + if (!value) { + return; + } + let params = {filename : value, type : "path", format: "folder"}; + this.updateParameters(params, true); + }); + }); + addLoadImagesCommon(nodeType, nodeData); + } else if (nodeData?.name == "VHS_LoadVideo") { + addUploadWidget(nodeType, nodeData, "video"); + chainCallback(nodeType.prototype, "onNodeCreated", function() { + const pathWidget = this.widgets.find((w) => w.name === "video"); + chainCallback(pathWidget, "callback", (value) => { + if (!value) { + return; + } + let parts = ["input", value]; + let extension_index = parts[1].lastIndexOf("."); + let extension = parts[1].slice(extension_index+1); + let format = "video" + if (["gif", "webp", "avif"].includes(extension)) { + format = "image" + } + format += "/" + extension; + let params = {filename : parts[1], type : parts[0], format: format}; + this.updateParameters(params, true); + }); + }); + addLoadVideoCommon(nodeType, nodeData); + } else if (nodeData?.name =="VHS_LoadVideoPath") { + chainCallback(nodeType.prototype, "onNodeCreated", function() { + const pathWidget = this.widgets.find((w) => w.name === "video"); + chainCallback(pathWidget, "callback", (value) => { + let extension_index = value.lastIndexOf("."); + let extension = value.slice(extension_index+1); + let format = "video" + if (["gif", "webp", "avif"].includes(extension)) { + format = "image" + } + format += "/" + extension; + let params = {filename : 
value, type: "path", format: format}; + this.updateParameters(params, true); + }); + }); + addLoadVideoCommon(nodeType, nodeData); + } else if (nodeData?.name == "VHS_VideoCombine") { + addDateFormatting(nodeType, "filename_prefix"); + chainCallback(nodeType.prototype, "onExecuted", function(message) { + if (message?.gifs) { + this.updateParameters(message.gifs[0], true); + } + }); + addVideoPreview(nodeType); + addPreviewOptions(nodeType); + addFormatWidgets(nodeType); + + //Hide the information passing 'gif' output + //TODO: check how this is implemented for save image + chainCallback(nodeType.prototype, "onNodeCreated", function() { + this._outputs = this.outputs + Object.defineProperty(this, "outputs", { + set : function(value) { + this._outputs = value; + requestAnimationFrame(() => { + if (app.nodeOutputs[this.id + ""]) { + this.updateParameters(app.nodeOutputs[this.id+""].gifs[0], true); + } + }) + }, + get : function() { + return this._outputs; + } + }); + //Display previews after reload/ loading workflow + requestAnimationFrame(() => {this.updateParameters({}, true);}); + }); + } else if (nodeData?.name == "VHS_SaveImageSequence") { + //Disabled for safety as VHS_SaveImageSequence is not currently merged + //addDateFormating(nodeType, "directory_name", timestamp_widget=true); + //addTimestampWidget(nodeType, nodeData, "directory_name") + } else if (nodeData?.name == "VHS_BatchManager") { + chainCallback(nodeType.prototype, "onNodeCreated", function() { + this.widgets.push({name: "count", type: "dummy", value: 0, + computeSize: () => {return [0,-4]}, + afterQueued: function() {this.value++;}}); + }); + } + }, + async getCustomWidgets() { + return { + VHSPATH(node, inputName, inputData) { + let w = { + name : inputName, + type : "VHS.PATH", + value : "", + draw : function(ctx, node, widget_width, y, H) { + //Adapted from litegraph.core.js:drawNodeWidgets + var show_text = app.canvas.ds.scale > 0.5; + var margin = 15; + var text_color = LiteGraph.WIDGET_TEXT_COLOR; + var secondary_text_color = LiteGraph.WIDGET_SECONDARY_TEXT_COLOR; + ctx.textAlign = "left"; + ctx.strokeStyle = LiteGraph.WIDGET_OUTLINE_COLOR; + ctx.fillStyle = LiteGraph.WIDGET_BGCOLOR; + ctx.beginPath(); + if (show_text) + ctx.roundRect(margin, y, widget_width - margin * 2, H, [H * 0.5]); + else + ctx.rect( margin, y, widget_width - margin * 2, H ); + ctx.fill(); + if (show_text) { + if(!this.disabled) + ctx.stroke(); + ctx.save(); + ctx.beginPath(); + ctx.rect(margin, y, widget_width - margin * 2, H); + ctx.clip(); + + //ctx.stroke(); + ctx.fillStyle = secondary_text_color; + const label = this.label || this.name; + if (label != null) { + ctx.fillText(label, margin * 2, y + H * 0.7); + } + ctx.fillStyle = text_color; + ctx.textAlign = "right"; + let disp_text = this.format_path(String(this.value)) + ctx.fillText(disp_text, widget_width - margin * 2, y + H * 0.7); //30 chars max + ctx.restore(); + } + }, + mouse : searchBox, + options : {}, + format_path : function(path) { + //Formats the full path to be under 30 characters + if (path.length <= 30) { + return path; + } + let filename = path_stem(path)[1] + if (filename.length > 28) { + //may all fit, but can't squeeze more info + return filename.substr(0,30); + } + //TODO: find solution for windows, path[1] == ':'? + let isAbs = path[0] == '/'; + let partial = path.substr(path.length - (isAbs ? 
28:29)) + let cutoff = partial.indexOf('/'); + if (cutoff < 0) { + //Can occur, but there isn't a nicer way to format + return path.substr(path.length-30); + } + return (isAbs ? '/…':'…') + partial.substr(cutoff); + + } + }; + if (inputData.length > 1) { + if (inputData[1].vhs_path_extensions) { + w.options.extensions = inputData[1].vhs_path_extensions; + } + if (inputData[1].default) { + w.value = inputData[1].default; + } + } + + if (!node.widgets) { + node.widgets = []; + } + node.widgets.push(w); + return w; + } + } + } +}); diff --git a/custom_nodes/ComfyUI-VideoHelperSuite/web/js/videoinfo.js b/custom_nodes/ComfyUI-VideoHelperSuite/web/js/videoinfo.js new file mode 100644 index 0000000000000000000000000000000000000000..1947b47e21319268be0b614e80dc85b0da5b505b --- /dev/null +++ b/custom_nodes/ComfyUI-VideoHelperSuite/web/js/videoinfo.js @@ -0,0 +1,102 @@ +import { app } from '../../../scripts/app.js' + + +function getVideoMetadata(file) { + return new Promise((r) => { + const reader = new FileReader(); + reader.onload = (event) => { + const videoData = new Uint8Array(event.target.result); + const dataView = new DataView(videoData.buffer); + + let decoder = new TextDecoder(); + // Check for known valid magic strings + if (dataView.getUint32(0) == 0x1A45DFA3) { + //webm + //see http://wiki.webmproject.org/webm-metadata/global-metadata + //and https://www.matroska.org/technical/elements.html + //contrary to specs, tag seems consistently at start + //COMMENT + 0x4487 + packed length? + //length 0x8d8 becomes 0x48d8 + // + //description for variable length ints https://github.com/ietf-wg-cellar/ebml-specification/blob/master/specification.markdown + let offset = 4 + 8; //COMMENT is 7 chars + 1 to realign + while(offset < videoData.length-16) { + //Check for text tags + if (dataView.getUint16(offset) == 0x4487) { + //check that name of tag is COMMENT + const name = String.fromCharCode(...videoData.slice(offset-7,offset)); + if (name === "COMMENT") { + let vint = dataView.getUint32(offset+2); + let n_octets = Math.clz32(vint)+1; + if (n_octets < 4) {//250MB sanity cutoff + let length = (vint >> (8*(4-n_octets))) & ~(1 << (7*n_octets)); + const content = decoder.decode(videoData.slice(offset+2+n_octets, offset+2+n_octets+length)); + const json = JSON.parse(content); + r(json); + return; + } + } + } + offset+=1; + } + } else if (dataView.getUint32(4) == 0x66747970 && dataView.getUint32(8) == 0x69736F6D) { + //mp4 + //see https://developer.apple.com/documentation/quicktime-file-format + //Seems to make no guarantee for alignment + let offset = videoData.length-4; + while (offset > 16) {//rough safe guess + if (dataView.getUint32(offset) == 0x64617461) {//any data tag + if (dataView.getUint32(offset - 8) == 0xa9636d74) {//cmt data tag + let type = dataView.getUint32(offset+4); //seemingly 1 + let locale = dataView.getUint32(offset+8); //seemingly 0 + let size = dataView.getUint32(offset-4) - 4*4; + const content = decoder.decode(videoData.slice(offset+12, offset+12+size)); + const json = JSON.parse(content); + r(json); + return; + } + } + + offset-=1; + } + } else { + console.error("Unknown magic: " + dataView.getUint32(0)) + r(); + return; + } + + }; + + reader.readAsArrayBuffer(file); + }); +} +function isVideoFile(file) { + if (file?.name?.endsWith(".webm")) { + return true; + } + if (file?.name?.endsWith(".mp4")) { + return true; + } + + return false; +} + +let originalHandleFile = app.handleFile; +app.handleFile = handleFile; +async function handleFile(file) { + if 
(file?.type?.startsWith("video/") || isVideoFile(file)) { + const videoInfo = await getVideoMetadata(file); + if (videoInfo) { + if (videoInfo.workflow) { + + app.loadGraphData(videoInfo.workflow); + } + //Potentially check for/parse A1111 metadata here. + } + } else { + return await originalHandleFile.apply(this, arguments); + } +} + +//hijack comfy-file-input to allow webm/mp4 +document.getElementById("comfy-file-input").accept += ",video/webm,video/mp4"; diff --git a/custom_nodes/ComfyUI_FizzNodes/BatchFuncs.py b/custom_nodes/ComfyUI_FizzNodes/BatchFuncs.py new file mode 100644 index 0000000000000000000000000000000000000000..e81993e42ea9b85d871a071af9fb6dd8170ae801 --- /dev/null +++ b/custom_nodes/ComfyUI_FizzNodes/BatchFuncs.py @@ -0,0 +1,435 @@ +# These nodes were made using code from the Deforum extension for A1111 webui +# You can find the project here: https://github.com/deforum-art/sd-webui-deforum + +import numexpr +import torch +import numpy as np +import pandas as pd +import re + +from .ScheduleFuncs import * + +def prepare_batch_prompt(prompt_series, max_frames, frame_idx, prompt_weight_1=0, prompt_weight_2=0, prompt_weight_3=0, + prompt_weight_4=0): # calculate expressions from the text input and return a string + max_f = max_frames - 1 + pattern = r'`.*?`' # set so the expression will be read between two backticks (``) + regex = re.compile(pattern) + prompt_parsed = str(prompt_series) + + for match in regex.finditer(prompt_parsed): + matched_string = match.group(0) + parsed_string = matched_string.replace('t', f'{frame_idx}').replace("pw_a", f"{prompt_weight_1}").replace("pw_b", + f"{prompt_weight_2}").replace("pw_c", f"{prompt_weight_3}").replace("pw_d", + f"{prompt_weight_4}").replace("max_f", + f"{max_f}").replace('`', '') # replace t, max_f and `` respectively + parsed_value = numexpr.evaluate(parsed_string) + prompt_parsed = prompt_parsed.replace(matched_string, str(parsed_value)) + return prompt_parsed.strip() + +def batch_split_weighted_subprompts(text, pre_text, app_text): + pos = {} + neg = {} + pre_text = str(pre_text) + app_text = str(app_text) + + if "--neg" in pre_text: + pre_pos, pre_neg = pre_text.split("--neg") + else: + pre_pos, pre_neg = pre_text, "" + + if "--neg" in app_text: + app_pos, app_neg = app_text.split("--neg") + else: + app_pos, app_neg = app_text, "" + + for frame, prompt in text.items(): + negative_prompts = "" + positive_prompts = "" + prompt_split = prompt.split("--neg") + + if len(prompt_split) > 1: + positive_prompts, negative_prompts = prompt_split[0], prompt_split[1] + else: + positive_prompts = prompt_split[0] + + pos[frame] = "" + neg[frame] = "" + pos[frame] += (str(pre_pos) + " " + positive_prompts + " " + str(app_pos)) + neg[frame] += (str(pre_neg) + " " + negative_prompts + " " + str(app_neg)) + if pos[frame].endswith('0'): + pos[frame] = pos[frame][:-1] + if neg[frame].endswith('0'): + neg[frame] = neg[frame][:-1] + return pos, neg + +def interpolate_prompt_series(animation_prompts, max_frames, start_frame, pre_text, app_text, prompt_weight_1=[], + prompt_weight_2=[], prompt_weight_3=[], prompt_weight_4=[], Is_print = False): + + max_f = max_frames # needed for numexpr even though it doesn't look like it's in use. 
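+ # Keys may be plain frame numbers or numexpr expressions resolved against max_f; + # values may embed backtick expressions that prepare_batch_prompt evaluates per + # frame. An illustrative (hypothetical) schedule: + # {"0": "a forest", "max_f/2": "a city", "max_f": "a desert, `pw_a*sin(t/max_f)`"}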
+ parsed_animation_prompts = {} + + + for key, value in animation_prompts.items(): + if check_is_number(key): # default case 0:(1 + t %5), 30:(5-t%2) + parsed_animation_prompts[key] = value + else: # math on the left hand side case 0:(1 + t %5), maxKeyframes/2:(5-t%2) + parsed_animation_prompts[int(numexpr.evaluate(key))] = value + + sorted_prompts = sorted(parsed_animation_prompts.items(), key=lambda item: int(item[0])) + + # Automatically set the first keyframe to 0 if it's missing + if sorted_prompts[0][0] != "0": + sorted_prompts.insert(0, ("0", sorted_prompts[0][1])) + + # Automatically set the last keyframe to the maximum number of frames + if sorted_prompts[-1][0] != str(max_frames): + sorted_prompts.append((str(max_frames), sorted_prompts[-1][1])) + # Setup containers for interpolated prompts + cur_prompt_series = pd.Series([np.nan for a in range(max_frames)]) + nxt_prompt_series = pd.Series([np.nan for a in range(max_frames)]) + + # simple array for strength values + weight_series = [np.nan] * max_frames + + # in case there is only one keyed promt, set all prompts to that prompt + if len(sorted_prompts) == 1: + for i in range(0, len(cur_prompt_series) - 1): + current_prompt = sorted_prompts[0][1] + cur_prompt_series[i] = str(current_prompt) + nxt_prompt_series[i] = str(current_prompt) + + # Initialized outside of loop for nan check + current_key = 0 + next_key = 0 + + if type(prompt_weight_1) in {int, float}: + prompt_weight_1 = tuple([prompt_weight_1] * max_frames) + + if type(prompt_weight_2) in {int, float}: + prompt_weight_2 = tuple([prompt_weight_2] * max_frames) + + if type(prompt_weight_3) in {int, float}: + prompt_weight_3 = tuple([prompt_weight_3] * max_frames) + + if type(prompt_weight_4) in {int, float}: + prompt_weight_4 = tuple([prompt_weight_4] * max_frames) + + # For every keyframe prompt except the last + for i in range(0, len(sorted_prompts) - 1): + # Get current and next keyframe + current_key = int(sorted_prompts[i][0]) + next_key = int(sorted_prompts[i + 1][0]) + + # Ensure there's no weird ordering issues or duplication in the animation prompts + # (unlikely because we sort above, and the json parser will strip dupes) + if current_key >= next_key: + print( + f"WARNING: Sequential prompt keyframes {i}:{current_key} and {i + 1}:{next_key} are not monotonously increasing; skipping interpolation.") + continue + + # Get current and next keyframes' positive and negative prompts (if any) + current_prompt = sorted_prompts[i][1] + next_prompt = sorted_prompts[i + 1][1] + + # Calculate how much to shift the weight from current to next prompt at each frame. + weight_step = 1 / (next_key - current_key) + + for f in range(max(current_key, 0), min(next_key, len(cur_prompt_series))): + next_weight = weight_step * (f - current_key) + current_weight = 1 - next_weight + + # add the appropriate prompts and weights to their respective containers. 
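+ # e.g. with keyframes at 0 and 10, frame 5 stores current_weight 0.5, so the + # two prompts are later crossfaded 50/50 when their conditionings are combined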
+ weight_series[f] = 0.0 + cur_prompt_series[f] = str(current_prompt) + nxt_prompt_series[f] = str(next_prompt) + + weight_series[f] += current_weight + + current_key = next_key + next_key = max_frames + current_weight = 0.0 + + index_offset = 0 + # Evaluate the current and next prompt's expressions + for i in range(start_frame, len(cur_prompt_series)): + cur_prompt_series[i] = prepare_batch_prompt(cur_prompt_series[i], max_frames, i, prompt_weight_1[i], + prompt_weight_2[i], prompt_weight_3[i], prompt_weight_4[i]) + nxt_prompt_series[i] = prepare_batch_prompt(nxt_prompt_series[i], max_frames, i, prompt_weight_1[i], + prompt_weight_2[i], prompt_weight_3[i], prompt_weight_4[i]) + if Is_print == True: + # Show the to/from prompts with evaluated expressions for transparency. + print("\n", "Max Frames: ", max_frames, "\n", "frame index: ", (start_frame + i), "\n", "Current Prompt: ", + cur_prompt_series[i], "\n", "Next Prompt: ", nxt_prompt_series[i], "\n", "Strength : ", + weight_series[i], "\n") + index_offset = index_offset + 1 + + + + # Output methods depending if the prompts are the same or if the current frame is a keyframe. + # if it is an in-between frame and the prompts differ, composable diffusion will be performed. + return (cur_prompt_series, nxt_prompt_series, weight_series) + + +def BatchPoolAnimConditioning(cur_prompt_series, nxt_prompt_series, weight_series, clip): + pooled_out = [] + cond_out = [] + + for i in range(len(cur_prompt_series)): + tokens = clip.tokenize(str(cur_prompt_series[i])) + cond_to, pooled_to = clip.encode_from_tokens(tokens, return_pooled=True) + + if i < len(nxt_prompt_series): + tokens = clip.tokenize(str(nxt_prompt_series[i])) + cond_from, pooled_from = clip.encode_from_tokens(tokens, return_pooled=True) + else: + cond_from, pooled_from = torch.zeros_like(cond_to), torch.zeros_like(pooled_to) + + interpolated_conditioning = addWeighted([[cond_to, {"pooled_output": pooled_to}]], + [[cond_from, {"pooled_output": pooled_from}]], + weight_series[i]) + + interpolated_cond = interpolated_conditioning[0][0] + interpolated_pooled = interpolated_conditioning[0][1].get("pooled_output", pooled_from) + + cond_out.append(interpolated_cond) + pooled_out.append(interpolated_pooled) + + final_pooled_output = torch.cat(pooled_out, dim=0) + final_conditioning = torch.cat(cond_out, dim=0) + + return [[final_conditioning, {"pooled_output": final_pooled_output}]] + + + + + +def BatchGLIGENConditioning(cur_prompt_series, nxt_prompt_series, weight_series, clip): + pooled_out = [] + cond_out = [] + + for i in range(len(cur_prompt_series)): + tokens = clip.tokenize(str(cur_prompt_series[i])) + cond_to, pooled_to = clip.encode_from_tokens(tokens, return_pooled=True) + + tokens = clip.tokenize(str(nxt_prompt_series[i])) + cond_from, pooled_from = clip.encode_from_tokens(tokens, return_pooled=True) + + interpolated_conditioning = addWeighted([[cond_to, {"pooled_output": pooled_to}]], + [[cond_from, {"pooled_output": pooled_from}]], + weight_series[i]) + + interpolated_cond = interpolated_conditioning[0][0] + interpolated_pooled = interpolated_conditioning[0][1].get("pooled_output", pooled_from) + + pooled_out.append(interpolated_pooled) + cond_out.append(interpolated_cond) + + final_pooled_output = torch.cat(pooled_out, dim=0) + final_conditioning = torch.cat(cond_out, dim=0) + + return cond_out, pooled_out + +def BatchPoolAnimConditioningSDXL(cur_prompt_series, nxt_prompt_series, weight_series, clip): + pooled_out = [] + cond_out = [] + + for i in range(len(cur_prompt_series)): + 
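# Unlike BatchPoolAnimConditioning above, the series passed in here already + # hold encoded SDXL conditionings (built with SDXLencode in + # BatchInterpolatePromptsSDXL), so addWeighted blends them directly instead of + # tokenizing per frame. +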
interpolated_conditioning = addWeighted(cur_prompt_series[i], + nxt_prompt_series[i], + weight_series[i]) + + interpolated_cond = interpolated_conditioning[0][0] + interpolated_pooled = interpolated_conditioning[0][1].get("pooled_output") + + pooled_out.append(interpolated_pooled) + cond_out.append(interpolated_cond) + + final_pooled_output = torch.cat(pooled_out, dim=0) + final_conditioning = torch.cat(cond_out, dim=0) + + return [[final_conditioning, {"pooled_output": final_pooled_output}]] + + +def BatchInterpolatePromptsSDXL(animation_promptsG, animation_promptsL, max_frames, clip, app_text_G, + app_text_L, pre_text_G, pre_text_L, pw_a, pw_b, pw_c, pw_d, width, height, crop_w, + crop_h, target_width, target_height, Is_print = False): + + # parse the conditioning strength and determine in-betweens. + # Get prompts sorted by keyframe + max_f = max_frames # needed for numexpr even though it doesn't look like it's in use. + parsed_animation_promptsG = {} + parsed_animation_promptsL = {} + for key, value in animation_promptsG.items(): + if check_is_number(key): # default case 0:(1 + t %5), 30:(5-t%2) + parsed_animation_promptsG[key] = value + else: # math on the left hand side case 0:(1 + t %5), maxKeyframes/2:(5-t%2) + parsed_animation_promptsG[int(numexpr.evaluate(key))] = value + + sorted_prompts_G = sorted(parsed_animation_promptsG.items(), key=lambda item: int(item[0])) + + for key, value in animation_promptsL.items(): + if check_is_number(key): # default case 0:(1 + t %5), 30:(5-t%2) + parsed_animation_promptsL[key] = value + else: # math on the left hand side case 0:(1 + t %5), maxKeyframes/2:(5-t%2) + parsed_animation_promptsL[int(numexpr.evaluate(key))] = value + + sorted_prompts_L = sorted(parsed_animation_promptsL.items(), key=lambda item: int(item[0])) + + # Setup containers for interpolated prompts + cur_prompt_series_G = pd.Series([np.nan for a in range(max_frames)]) + nxt_prompt_series_G = pd.Series([np.nan for a in range(max_frames)]) + + cur_prompt_series_L = pd.Series([np.nan for a in range(max_frames)]) + nxt_prompt_series_L = pd.Series([np.nan for a in range(max_frames)]) + + # simple array for strength values + weight_series = [np.nan] * max_frames + + # in case there is only one keyed promt, set all prompts to that prompt + if len(sorted_prompts_G) - 1 == 0: + for i in range(0, len(cur_prompt_series_G) - 1): + current_prompt_G = sorted_prompts_G[0][1] + cur_prompt_series_G[i] = str(pre_text_G) + " " + str(current_prompt_G) + " " + str(app_text_G) + nxt_prompt_series_G[i] = str(pre_text_G) + " " + str(current_prompt_G) + " " + str(app_text_G) + + if len(sorted_prompts_L) - 1 == 0: + for i in range(0, len(cur_prompt_series_L) - 1): + current_prompt_L = sorted_prompts_L[0][1] + cur_prompt_series_L[i] = str(pre_text_L) + " " + str(current_prompt_L) + " " + str(app_text_L) + nxt_prompt_series_L[i] = str(pre_text_L) + " " + str(current_prompt_L) + " " + str(app_text_L) + + # Initialized outside of loop for nan check + current_key = 0 + next_key = 0 + + # For every keyframe prompt except the last + for i in range(0, len(sorted_prompts_G) - 1): + # Get current and next keyframe + current_key = int(sorted_prompts_G[i][0]) + next_key = int(sorted_prompts_G[i + 1][0]) + + # Ensure there's no weird ordering issues or duplication in the animation prompts + # (unlikely because we sort above, and the json parser will strip dupes) + if current_key >= next_key: + print( + f"WARNING: Sequential prompt keyframes {i}:{current_key} and {i + 1}:{next_key} are not monotonously increasing; 
skipping interpolation.") + continue + + # Get current and next keyframes' positive and negative prompts (if any) + current_prompt_G = sorted_prompts_G[i][1] + next_prompt_G = sorted_prompts_G[i + 1][1] + + # Calculate how much to shift the weight from current to next prompt at each frame. + weight_step = 1 / (next_key - current_key) + + for f in range(current_key, next_key): + next_weight = weight_step * (f - current_key) + current_weight = 1 - next_weight + + # add the appropriate prompts and weights to their respective containers. + if f < max_frames: + cur_prompt_series_G[f] = '' + nxt_prompt_series_G[f] = '' + weight_series[f] = 0.0 + + cur_prompt_series_G[f] += (str(pre_text_G) + " " + str(current_prompt_G) + " " + str(app_text_G)) + nxt_prompt_series_G[f] += (str(pre_text_G) + " " + str(next_prompt_G) + " " + str(app_text_G)) + + weight_series[f] += current_weight + + current_key = next_key + next_key = max_frames + current_weight = 0.0 + # second loop to catch any nan runoff + for f in range(current_key, next_key): + next_weight = weight_step * (f - current_key) + + # add the appropriate prompts and weights to their respective containers. + cur_prompt_series_G[f] = '' + nxt_prompt_series_G[f] = '' + weight_series[f] = current_weight + + cur_prompt_series_G[f] += (str(pre_text_G) + " " + str(current_prompt_G) + " " + str(app_text_G)) + nxt_prompt_series_G[f] += (str(pre_text_G) + " " + str(next_prompt_G) + " " + str(app_text_G)) + + # Reset outside of loop for nan check + current_key = 0 + next_key = 0 + + # For every keyframe prompt except the last + for i in range(0, len(sorted_prompts_L) - 1): + # Get current and next keyframe + current_key = int(sorted_prompts_L[i][0]) + next_key = int(sorted_prompts_L[i + 1][0]) + + # Ensure there's no weird ordering issues or duplication in the animation prompts + # (unlikely because we sort above, and the json parser will strip dupes) + if current_key >= next_key: + print( + f"WARNING: Sequential prompt keyframes {i}:{current_key} and {i + 1}:{next_key} are not monotonously increasing; skipping interpolation.") + continue + + # Get current and next keyframes' positive and negative prompts (if any) + current_prompt_L = sorted_prompts_L[i][1] + next_prompt_L = sorted_prompts_L[i + 1][1] + + # Calculate how much to shift the weight from current to next prompt at each frame. + weight_step = 1 / (next_key - current_key) + + for f in range(current_key, next_key): + next_weight = weight_step * (f - current_key) + current_weight = 1 - next_weight + + # add the appropriate prompts and weights to their respective containers. + if f < max_frames: + cur_prompt_series_L[f] = '' + nxt_prompt_series_L[f] = '' + weight_series[f] = 0.0 + + cur_prompt_series_L[f] += (str(pre_text_L) + " " + str(current_prompt_L) + " " + str(app_text_L)) + nxt_prompt_series_L[f] += (str(pre_text_L) + " " + str(next_prompt_L) + " " + str(app_text_L)) + + weight_series[f] += current_weight + + current_key = next_key + next_key = max_frames + current_weight = 0.0 + # second loop to catch any nan runoff + for f in range(current_key, next_key): + next_weight = weight_step * (f - current_key) + + # add the appropriate prompts and weights to their respective containers. 
+        cur_prompt_series_L[f] = ''
+        nxt_prompt_series_L[f] = ''
+        weight_series[f] = current_weight
+
+        cur_prompt_series_L[f] += (str(pre_text_L) + " " + str(current_prompt_L) + " " + str(app_text_L))
+        nxt_prompt_series_L[f] += (str(pre_text_L) + " " + str(next_prompt_L) + " " + str(app_text_L))
+
+    # Evaluate the current and next prompt's expressions
+    for i in range(0, max_frames):
+        cur_prompt_series_G[i] = prepare_batch_prompt(cur_prompt_series_G[i], max_frames, i,
+                                                      pw_a, pw_b, pw_c, pw_d)
+        nxt_prompt_series_G[i] = prepare_batch_prompt(nxt_prompt_series_G[i], max_frames, i,
+                                                      pw_a, pw_b, pw_c, pw_d)
+        cur_prompt_series_L[i] = prepare_batch_prompt(cur_prompt_series_L[i], max_frames, i,
+                                                      pw_a, pw_b, pw_c, pw_d)
+        nxt_prompt_series_L[i] = prepare_batch_prompt(nxt_prompt_series_L[i], max_frames, i,
+                                                      pw_a, pw_b, pw_c, pw_d)
+
+    current_conds = []
+    next_conds = []
+    for i in range(0, max_frames):
+        current_conds.append(SDXLencode(clip, width, height, crop_w, crop_h, target_width, target_height,
+                                        cur_prompt_series_G[i], cur_prompt_series_L[i]))
+        next_conds.append(SDXLencode(clip, width, height, crop_w, crop_h, target_width, target_height,
+                                     nxt_prompt_series_G[i], nxt_prompt_series_L[i]))
+
+    if Is_print:
+        # Show the to/from prompts with evaluated expressions for transparency.
+        for i in range(0, max_frames):
+            print("\n", "Max Frames: ", max_frames, "\n", "Current Prompt G: ", cur_prompt_series_G[i],
+                  "\n", "Current Prompt L: ", cur_prompt_series_L[i], "\n", "Next Prompt G: ", nxt_prompt_series_G[i],
+                  "\n", "Next Prompt L : ", nxt_prompt_series_L[i], "\n", "Current weight: ", weight_series[i], "\n")
+
+    return BatchPoolAnimConditioningSDXL(current_conds, next_conds, weight_series, clip)
diff --git a/custom_nodes/ComfyUI_FizzNodes/FrameNodes.py b/custom_nodes/ComfyUI_FizzNodes/FrameNodes.py
new file mode 100644
index 0000000000000000000000000000000000000000..741426fd2b69912cbbf4fb5508cb8dded540fe33
--- /dev/null
+++ b/custom_nodes/ComfyUI_FizzNodes/FrameNodes.py
@@ -0,0 +1,222 @@
+class StringConcatenate:
+    def __init__(self):
+        pass
+
+    defaultPrompt = """"0" :"",
+    "12" :"",
+    "24" :"",
+    "36" :"",
+    "48" :"",
+    "60" :"",
+    "72" :"",
+    "84" :"",
+    "96" :"",
+    "108" :"",
+    "120" :""
+    """
+    @classmethod
+    def INPUT_TYPES(cls):
+        return {
+            "required": {
+                "text_a": ("STRING", {"forceInput": True, "multiline": True, "default": ""}),
+                "frame_a": ("INT", {"default": 0}),
+                "text_b": ("STRING", {"forceInput": True, "multiline": True, "default": ""}),
+                "frame_b": ("INT", {"default": 12})
+            },
+            "optional": {
+                "text_c": ("STRING", {"forceInput": True, "multiline": True, "default": ""}),
+                "frame_c": ("INT", {"default": 24}),
+                "text_d": ("STRING", {"forceInput": True, "multiline": True, "default": ""}),
+                "frame_d": ("INT", {"default": 36}),
+                "text_e": ("STRING", {"forceInput": True, "multiline": True, "default": ""}),
+                "frame_e": ("INT", {"default": 48}),
+                "text_f": ("STRING", {"forceInput": True, "multiline": True, "default": ""}),
+                "frame_f": ("INT", {"default": 60}),
+                "text_g": ("STRING", {"forceInput": True, "multiline": True, "default": ""}),
+                "frame_g": ("INT", {"default": 72})
+            }
+        }
+    RETURN_TYPES = ("STRING",)
+    FUNCTION = "frame_concatenate_list"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/FrameNodes"
+
+    def frame_concatenate_list(self, text_a, frame_a, text_b, frame_b, text_c=None, frame_c=None, text_d=None,
+                               frame_d=None, text_e=None, frame_e=None, text_f=None, frame_f=None, text_g=None,
+                               frame_g=None):
+
+        text_a = text_a.replace('\n', '')
+        text_b = 
text_b.replace('\n', '') + text_c = text_c.replace('\n', '') if text_c is not None else None + text_d = text_d.replace('\n', '') if text_d is not None else None + text_e = text_e.replace('\n', '') if text_e is not None else None + text_f = text_f.replace('\n', '') if text_f is not None else None + text_g = text_g.replace('\n', '') if text_g is not None else None + + text_list = f'"{frame_a}": "{text_a}",' + text_list += f'"{frame_b}": "{text_b}",' + + if frame_c is not None and text_c is not None: + text_list += f'"{frame_c}": "{text_c}",' + + if frame_d is not None and text_d is not None: + text_list += f'"{frame_d}": "{text_d}",' + + if frame_e is not None and text_e is not None: + text_list += f'"{frame_e}": "{text_e}",' + + if frame_f is not None and text_f is not None: + text_list += f'"{frame_f}": "{text_f}",' + + if frame_g is not None and text_g is not None: + text_list += f'"{frame_g}": "{text_g}",' + + return (text_list,) + + +class InitNodeFrame: + def __init__(self): + self.frames = {} + self.thisFrame = {} + + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "frame": ("INT", {"default": 0, "min": 0}), + "positive_text": ("STRING", {"multiline": True}), + }, + "optional": { + "negative_text": ("STRING", {"multiline": True}), + "general_positive": ("STRING", {"multiline": True}), + "general_negative": ("STRING", {"multiline": True}), + "previous_frame": ("FIZZFRAME", {"forceInput": True}), + "clip": ("CLIP",), + } + } + RETURN_TYPES = ("FIZZFRAME","CONDITIONING","CONDITIONING",) + FUNCTION = "create_frame" + + CATEGORY = "FizzNodes 📅🅕🅝/FrameNodes" + + def create_frame(self, frame, positive_text, negative_text=None, general_positive=None, general_negative=None, previous_frame=None, clip=None): + new_frame = { + "positive_text": positive_text, + "negative_text": negative_text, + } + + if previous_frame: + prev_frame = previous_frame.thisFrame + new_frame["general_positive"] = prev_frame["general_positive"] + new_frame["general_negative"] = prev_frame["general_negative"] + new_frame["clip"] = prev_frame["clip"] + self.frames = previous_frame.frames + + if general_positive: + new_frame["general_positive"] = general_positive + + if general_negative: + new_frame["general_negative"] = general_negative + + new_positive_text = f"{positive_text}, {new_frame['general_positive']}" + new_negative_text = f"{negative_text}, {new_frame['general_negative']}" + + if clip: + new_frame["clip"] = clip + + pos_tokens = new_frame["clip"].tokenize(new_positive_text) + pos_cond, pos_pooled = new_frame["clip"].encode_from_tokens(pos_tokens, return_pooled=True) + new_frame["pos_conditioning"] = {"cond": pos_cond, "pooled": pos_pooled} + + neg_tokens = new_frame["clip"].tokenize(new_negative_text) + neg_cond, neg_pooled = new_frame["clip"].encode_from_tokens(neg_tokens, return_pooled=True) + new_frame["neg_conditioning"] = {"cond": neg_cond, "pooled": neg_pooled} + + self.frames[frame] = new_frame + self.thisFrame = new_frame + + return (self, [[pos_cond, {"pooled_output": pos_pooled}]], [[neg_cond, {"pooled_output": neg_pooled}]]) + +class NodeFrame: + + def __init__(self): + self.frames = {} + self.thisFrame = {} + + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "frame": ("INT", {"default": 0, "min": 0}), + "previous_frame": ("FIZZFRAME", {"forceInput": True}), + "positive_text": ("STRING", {"multiline": True}), + }, + "optional": { + "negative_text": ("STRING", {"multiline": True}), + } + } + RETURN_TYPES = ("FIZZFRAME","CONDITIONING","CONDITIONING",) + FUNCTION = 
"create_frame" + + CATEGORY = "FizzNodes 📅🅕🅝/FrameNodes" + + def create_frame(self, frame, previous_frame, positive_text, negative_text=None): + self.frames = previous_frame.frames + prev_frame = previous_frame.thisFrame + + new_positive_text = f"{positive_text}, {prev_frame['general_positive']}" + new_negative_text = f"{negative_text}, {prev_frame['general_negative']}" + + pos_tokens = prev_frame["clip"].tokenize(new_positive_text) + pos_cond, pos_pooled = prev_frame["clip"].encode_from_tokens(pos_tokens, return_pooled=True) + + neg_tokens = prev_frame["clip"].tokenize(new_negative_text) + neg_cond, neg_pooled = prev_frame["clip"].encode_from_tokens(neg_tokens, return_pooled=True) + + new_frame = { + "positive_text": positive_text, + "negative_text": negative_text, + "general_positive": prev_frame["general_positive"], + "general_negative": prev_frame["general_negative"], + "clip": prev_frame["clip"], + "pos_conditioning": {"cond": pos_cond, "pooled": pos_pooled}, + "neg_conditioning": {"cond": neg_cond, "pooled": neg_pooled}, + } + self.thisFrame = new_frame + self.frames[frame] = new_frame + + return (self, [[pos_cond, {"pooled_output": pos_pooled}]], [[neg_cond, {"pooled_output": neg_pooled}]]) + +class FrameConcatenate: + def __init__(self): + pass + + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "frame": ("FIZZFRAME", {"forceInput": True}) + }, + } + RETURN_TYPES = ("STRING",) + FUNCTION = "frame_concatenate" + + CATEGORY = "FizzNodes 📅🅕🅝/FrameNodes" + + def frame_concatenate(self, frame): + text_list = "" + for frame_digit in frame.frames: + new_frame = frame.frames[frame_digit] + text_list += f'"{frame_digit}": "{new_frame["positive_text"]}' + if new_frame.get("general_positive"): + text_list += f', {new_frame["general_positive"]}' + if new_frame.get("negative_text") or new_frame.get("general_negative"): + text_list += f', --neg ' + if new_frame.get("negative_text"): + text_list += f', {new_frame["negative_text"]}' + if new_frame.get("general_negative"): + text_list += f', {new_frame["general_negative"]}' + text_list += f'",\n' + text_list = text_list[:-2] + + return (text_list,) \ No newline at end of file diff --git a/custom_nodes/ComfyUI_FizzNodes/HelperNodes.py b/custom_nodes/ComfyUI_FizzNodes/HelperNodes.py new file mode 100644 index 0000000000000000000000000000000000000000..9030346ede313f7ac649566cf21438a921a1fc0b --- /dev/null +++ b/custom_nodes/ComfyUI_FizzNodes/HelperNodes.py @@ -0,0 +1,59 @@ + +class CalculateFrameOffset: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "current_frame": ("INT", {"default": 0, "min": 0}), + "max_frames": ("INT", {"default": 18, "min": 0}), + "num_latent_inputs": ("INT", {"default": 4, "min": 0}), + "index": ("INT", {"default": 4, "min": 0}), + } + } + RETURN_TYPES = ("INT", ) + FUNCTION = "assignFrameNum" + + CATEGORY = "FizzNodes 📅🅕🅝/HelperNodes" + + def assignFrameNum(self, current_frame, max_frames, num_latent_inputs, index): + if current_frame == 0: + return (index,) + else: + start_frame = (current_frame - 1) * (num_latent_inputs - 1) + (num_latent_inputs-1) + return ((start_frame + index) % max_frames,) +class ConcatStringSingle: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "string_a": ("STRING", {"forceInput":True,"default":"","multiline": True}), + "string_b": ("STRING", {"forceInput":True,"default":"","multiline": True}), + } + } + RETURN_TYPES = ("STRING", ) + FUNCTION = "concat" + + CATEGORY = "FizzNodes 📅🅕🅝/HelperNodes" + + def concat(self, string_a, string_b): + c = 
string_a + string_b
+        return (c,)
+
+class convertKeyframeKeysToBatchKeys:
+    @classmethod
+    def INPUT_TYPES(cls):
+        return {
+            "required": {
+                "input": ("INT", {"forceInput": True, "default": 0}),
+                "num_latents": ("INT", {"default": 16}),
+            }
+        }
+
+    RETURN_TYPES = ("INT",)
+    FUNCTION = "concat"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/HelperNodes"
+
+    def concat(self, input, num_latents):
+        c = input * num_latents - 1
+        return (c,)
\ No newline at end of file
diff --git a/custom_nodes/ComfyUI_FizzNodes/LICENCE.txt b/custom_nodes/ComfyUI_FizzNodes/LICENCE.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d95e1fa9bf81a3ecd46c45912653f0054b8058b8
--- /dev/null
+++ b/custom_nodes/ComfyUI_FizzNodes/LICENCE.txt
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2023 Fizzledorf
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
\ No newline at end of file
diff --git a/custom_nodes/ComfyUI_FizzNodes/README.md b/custom_nodes/ComfyUI_FizzNodes/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..fc8d1642579980aa8c62de3fb10d02ef2bb725c9
--- /dev/null
+++ b/custom_nodes/ComfyUI_FizzNodes/README.md
@@ -0,0 +1,78 @@
+
+# FizzNodes
+Scheduled prompts, scheduled float/int values and wave function nodes for animations and utility. Compatible with https://www.framesync.xyz/ and https://www.chigozie.co.uk/keyframe-string-generator/ for audio-synced animations in [Comfyui](https://github.com/comfyanonymous/ComfyUI).
+
+**Please see the [FizzNodes wiki](https://github.com/FizzleDorf/ComfyUI_FizzNodes/wiki) for instructions on usage of these nodes, as well as handy resources you can use in your projects!**
+
+
+## Installation
+
+For the easiest install experience, install the [Comfyui Manager](https://github.com/ltdrdata/ComfyUI-Manager) and use that to automate the installation process.
+Otherwise, to manually install, simply clone the repo into the custom_nodes directory with this command:
+```
+git clone https://github.com/FizzleDorf/ComfyUI_FizzNodes.git
+```
+and install the requirements using:
+```
+.\python_embed\python.exe -s -m pip install -r requirements.txt
+```
+If you are using a venv, make sure you have it activated before installation and use:
+```
+pip install -r requirements.txt
+```
+
+Example | Instructions
+---|---
+![Fizznodes menu](https://github.com/FizzleDorf/ComfyUI_FizzNodes/assets/46942135/e07fedba-648c-4300-a6ac-61873b1501ab)|The nodes can be accessed in the FizzNodes section of the node menu. You can also use the node search to find the nodes you are looking for.
+
+-----
+
+## Examples
+Some examples using the prompt and value schedulers with base ComfyUI.
+
+### Simple Animation Workflow
+This example showcases making animations with only scheduled prompts. This method only uses 4.7 GB of memory and makes use of deterministic samplers (Euler in this case).
+
+
+![output](https://github.com/FizzleDorf/ComfyUI_FizzNodes/assets/46942135/82f21ab2-209c-43d7-a202-67d99fd3c823)
+
+
+Drag and drop the image in this link into ComfyUI to load the workflow, or save the image and load it using the load button.
+
+[Txt2_Img_Example](https://github.com/FizzleDorf/ComfyUI_FizzNodes/assets/46942135/8899f25e-fbc8-423c-bef2-e7c5a91fb7f4)
+
+
+### Noisy Latent Comp Workflow
+This example showcases the [Noisy Latent Composition](https://comfyanonymous.github.io/ComfyUI_examples/noisy_latent_composition/) workflow. The value schedule node schedules the latent composite node's x position. You can also animate the subject while the composite node is being scheduled!
+
+![output](https://github.com/FizzleDorf/ComfyUI_FizzNodes/assets/46942135/6ffe1078-1869-4b7a-990f-902b7eafd67d)
+
+
+Drag and drop the image in this link into ComfyUI to load the workflow, or save the image and load it using the load button.
+
+[Latent_Comp_Example](https://github.com/FizzleDorf/ComfyUI_FizzNodes/assets/46942135/410fbd99-d06e-489a-b6f5-3b747acd3740)
+
+
+## Helpful tools
+
+Just a list of tools that you may find handy using these nodes.
+
+Link | Description
+--- | ---
+[Desmos Graphing Calculator](https://www.desmos.com/calculator) | Online graphing calculator. Handy for visualizing expressions.
+[Keyframe String Generator](https://www.chigozie.co.uk/keyframe-string-generator/) | Custom keyframe string generator that is compatible with the valueSchedule node.
+[Audio framesync](https://www.framesync.xyz/) | Audio-synced wave functions. Exports keyframes for the valueSchedule node.
+[SD-Parseq](https://github.com/rewbs/sd-parseq) | A powerful scheduling tool for audio sync and easy curve manipulation (my personal fave!)
+-----
+
+## Acknowledgments
+
+**A special thanks to:**
+
+-The developers of [Deforum](https://github.com/deforum-art/sd-webui-deforum) for providing code for these nodes and being overall awesome people!
+
+-Comfyanonymous and the rest of the [ComfyUI](https://github.com/comfyanonymous/ComfyUI/tree/master) contributors for a fantastic UI!
+
+-All the friends I met along the way that motivate me into action!
+
+-and you the user! I hope you have fun using these nodes and exploring latent space.
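+
+-----
+
+## Quick syntax example
+
+As a rough, illustrative sketch (the wiki above is the authoritative reference): prompt schedules are keyframed `"frame" :"prompt"` pairs, and anything between backticks is evaluated per frame with `t` (the current frame) and `max_f` (the max frames):
+
+```
+"0" :"a watercolor painting of a cat",
+"60" :"a watercolor painting of a (dog:`t/max_f`)"
+```
+
+Value schedules use the same keyframing with expressions in parentheses, e.g. `0:(0), 60:(sin(t/10)), 120:(1.0)`.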
diff --git a/custom_nodes/ComfyUI_FizzNodes/ScheduleFuncs.py b/custom_nodes/ComfyUI_FizzNodes/ScheduleFuncs.py new file mode 100644 index 0000000000000000000000000000000000000000..63af241871887e17d6455fa2ca6788ffafd7c7ed --- /dev/null +++ b/custom_nodes/ComfyUI_FizzNodes/ScheduleFuncs.py @@ -0,0 +1,419 @@ +#These nodes were made using code from the Deforum extension for A1111 webui +#You can find the project here: https://github.com/deforum-art/sd-webui-deforum + +import numexpr +import torch +import numpy as np +import pandas as pd +import re +import json + +#functions used by PromptSchedule nodes + +#Addweighted function from Comfyui +def addWeighted(conditioning_to, conditioning_from, conditioning_to_strength): + out = [] + + if len(conditioning_from) > 1: + print("Warning: ConditioningAverage conditioning_from contains more than 1 cond, only the first one will actually be applied to conditioning_to.") + + cond_from = conditioning_from[0][0] + pooled_output_from = conditioning_from[0][1].get("pooled_output", None) + + for i in range(len(conditioning_to)): + t1 = conditioning_to[i][0] + pooled_output_to = conditioning_to[i][1].get("pooled_output", pooled_output_from) + + max_size = max(t1.shape[1], cond_from.shape[1]) + t0 = pad_with_zeros(cond_from, max_size) + t1 = pad_with_zeros(t1, max_size) + + tw = torch.mul(t1, conditioning_to_strength) + torch.mul(t0, (1.0 - conditioning_to_strength)) + t_to = conditioning_to[i][1].copy() + + if pooled_output_from is not None and pooled_output_to is not None: + # Pad pooled outputs if available + pooled_output_to = pad_with_zeros(pooled_output_to, max_size) + pooled_output_from = pad_with_zeros(pooled_output_from, max_size) + t_to["pooled_output"] = torch.mul(pooled_output_to, conditioning_to_strength) + torch.mul(pooled_output_from, (1.0 - conditioning_to_strength)) + elif pooled_output_from is not None: + t_to["pooled_output"] = pooled_output_from + + n = [tw, t_to] + out.append(n) + + return out + +def pad_with_zeros(tensor, target_length): + current_length = tensor.shape[1] + if current_length < target_length: + padding = torch.zeros(tensor.shape[0], target_length - current_length, tensor.shape[2]).to(tensor.device) + tensor = torch.cat([tensor, padding], dim=1) + return tensor + +def reverseConcatenation(final_conditioning, final_pooled_output, max_frames): + # Split the final_conditioning and final_pooled_output tensors into their original components + cond_out = torch.split(final_conditioning, max_frames) + pooled_out = torch.split(final_pooled_output, max_frames) + + return cond_out, pooled_out + +def check_is_number(value): + float_pattern = r'^(?=.)([+-]?([0-9]*)(\.([0-9]+))?)$' + return re.match(float_pattern, value) + +def split_weighted_subprompts(text, frame=0, pre_text='', app_text=''): + pre_text = str(pre_text) + app_text = str(app_text) + + if "--neg" in pre_text: + pre_pos, pre_neg = pre_text.split("--neg") + else: + pre_pos, pre_neg = pre_text, "" + + if "--neg" in app_text: + app_pos, app_neg = app_text.split("--neg") + else: + app_pos, app_neg = app_text, "" + + # Check if the text is a string; if not, convert it to a string + if not isinstance(text, str): + text = str(text) + + math_parser = re.compile("(?P(`[\S\s]*?`))", re.VERBOSE) + + parsed_prompt = re.sub(math_parser, lambda m: str(parse_weight(m, frame)), text) + + negative_prompts = "" + positive_prompts = "" + + # Check if the last character is '0' and remove it + prompt_split = parsed_prompt.split("--neg") + if len(prompt_split) > 1: + positive_prompts, 
negative_prompts = prompt_split[0], prompt_split[1] + else: + positive_prompts = prompt_split[0] + + pos = {} + neg = {} + pos[frame] = (str(pre_pos) + " " + str(positive_prompts) + " " + str(app_pos)) + neg[frame] = (str(pre_neg) + " " + str(negative_prompts) + " " + str(app_neg)) + if pos[frame].endswith('0'): + pos[frame] = pos[frame][:-1] + if neg[frame].endswith('0'): + neg[frame] = neg[frame][:-1] + + return pos, neg + +def parse_weight(match, frame=0, max_frames=0) -> float: #calculate weight steps for in-betweens + w_raw = match.group("weight") + max_f = max_frames # this line has to be left intact as it's in use by numexpr even though it looks like it doesn't + if w_raw is None: + return 1 + if check_is_number(w_raw): + return float(w_raw) + else: + t = frame + if len(w_raw) < 3: + print('the value inside `-characters cannot represent a math function') + return 1 + return float(numexpr.evaluate(w_raw[1:-1])) + +def prepare_prompt(prompt_series, max_frames, frame_idx, prompt_weight_1 = 0, prompt_weight_2 = 0, prompt_weight_3 = 0, prompt_weight_4 = 0): #calculate expressions from the text input and return a string + max_f = max_frames - 1 + pattern = r'`.*?`' #set so the expression will be read between two backticks (``) + regex = re.compile(pattern) + prompt_parsed = str(prompt_series) + for match in regex.finditer(prompt_parsed): + matched_string = match.group(0) + parsed_string = matched_string.replace('t', f'{frame_idx}').replace("pw_a", f"prompt_weight_1").replace("pw_b", f"prompt_weight_2").replace("pw_c", f"prompt_weight_3").replace("pw_d", f"prompt_weight_4").replace("max_f", f"{max_f}").replace('`', '') #replace t, max_f and `` respectively + parsed_value = numexpr.evaluate(parsed_string) + prompt_parsed = prompt_parsed.replace(matched_string, str(parsed_value)) + return prompt_parsed.strip() + +def interpolate_string(animation_prompts, max_frames, current_frame, pre_text, app_text, prompt_weight_1, + prompt_weight_2, prompt_weight_3, + prompt_weight_4): # parse the conditioning strength and determine in-betweens. + # Get prompts sorted by keyframe + max_f = max_frames # needed for numexpr even though it doesn't look like it's in use. 
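+    # For example, with max_frames = 120 a key written as "max_f/2" is evaluated by
+    # numexpr below and becomes frame 60, while plain numeric keys ("0", "30") pass through as-is.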
+ parsed_animation_prompts = {} + for key, value in animation_prompts.items(): + if check_is_number(key): # default case 0:(1 + t %5), 30:(5-t%2) + parsed_animation_prompts[key] = value + else: # math on the left hand side case 0:(1 + t %5), maxKeyframes/2:(5-t%2) + parsed_animation_prompts[int(numexpr.evaluate(key))] = value + + sorted_prompts = sorted(parsed_animation_prompts.items(), key=lambda item: int(item[0])) + + # Setup containers for interpolated prompts + cur_prompt_series = pd.Series([np.nan for a in range(max_frames)]) + + # simple array for strength values + weight_series = [np.nan] * max_frames + + # in case there is only one keyed promt, set all prompts to that prompt + if len(sorted_prompts) - 1 == 0: + for i in range(0, len(cur_prompt_series) - 1): + current_prompt = sorted_prompts[0][1] + cur_prompt_series[i] = str(pre_text) + " " + str(current_prompt) + " " + str(app_text) + + # Initialized outside of loop for nan check + current_key = 0 + next_key = 0 + + # For every keyframe prompt except the last + for i in range(0, len(sorted_prompts) - 1): + # Get current and next keyframe + current_key = int(sorted_prompts[i][0]) + next_key = int(sorted_prompts[i + 1][0]) + + # Ensure there's no weird ordering issues or duplication in the animation prompts + # (unlikely because we sort above, and the json parser will strip dupes) + if current_key >= next_key: + print( + f"WARNING: Sequential prompt keyframes {i}:{current_key} and {i + 1}:{next_key} are not monotonously increasing; skipping interpolation.") + continue + + # Get current and next keyframes' positive and negative prompts (if any) + current_prompt = sorted_prompts[i][1] + + for f in range(current_key, next_key): + # add the appropriate prompts and weights to their respective containers. + cur_prompt_series[f] = '' + weight_series[f] = 0.0 + + cur_prompt_series[f] += (str(pre_text) + " " + str(current_prompt) + " " + str(app_text)) + + current_key = next_key + next_key = max_frames + # second loop to catch any nan runoff + + for f in range(current_key, next_key): + # add the appropriate prompts and weights to their respective containers. + cur_prompt_series[f] = '' + cur_prompt_series[f] += (str(pre_text) + " " + str(current_prompt) + " " + str(app_text)) + + # Evaluate the current and next prompt's expressions + cur_prompt_series[current_frame] = prepare_prompt(cur_prompt_series[current_frame], max_frames, current_frame, + prompt_weight_1, prompt_weight_2, prompt_weight_3, + prompt_weight_4) + + # Show the to/from prompts with evaluated expressions for transparency. + print("\n", "Max Frames: ", max_frames, "\n", "Current Prompt: ", cur_prompt_series[current_frame], "\n") + + # Output methods depending if the prompts are the same or if the current frame is a keyframe. + # if it is an in-between frame and the prompts differ, composable diffusion will be performed. 
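+    # (In this string-only variant no conditioning is built here; the evaluated
+    # prompt text for current_frame is simply handed back to the caller.)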
+ return (cur_prompt_series[current_frame]) +def PoolAnimConditioning(cur_prompt, nxt_prompt, weight, clip): + if str(cur_prompt) == str(nxt_prompt): + tokens = clip.tokenize(str(cur_prompt)) + cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True) + return [[cond, {"pooled_output": pooled}]] + + if weight == 1: + tokens = clip.tokenize(str(cur_prompt)) + cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True) + return [[cond, {"pooled_output": pooled}]] + + if weight == 0: + tokens = clip.tokenize(str(nxt_prompt)) + cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True) + return [[cond, {"pooled_output": pooled}]] + else: + tokens = clip.tokenize(str(nxt_prompt)) + cond_from, pooled_from = clip.encode_from_tokens(tokens, return_pooled=True) + tokens = clip.tokenize(str(cur_prompt)) + cond_to, pooled_to = clip.encode_from_tokens(tokens, return_pooled=True) + return addWeighted([[cond_to, {"pooled_output": pooled_to}]], [[cond_from, {"pooled_output": pooled_from}]], weight) + +def SDXLencode(clip, width, height, crop_w, crop_h, target_width, target_height, text_g, text_l): + tokens = clip.tokenize(text_g) + tokens["l"] = clip.tokenize(text_l)["l"] + if len(tokens["l"]) != len(tokens["g"]): + empty = clip.tokenize("") + while len(tokens["l"]) < len(tokens["g"]): + tokens["l"] += empty["l"] + while len(tokens["l"]) > len(tokens["g"]): + tokens["g"] += empty["g"] + cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True) + return [[cond, {"pooled_output": pooled, "width": width, "height": height, "crop_w": crop_w, "crop_h": crop_h, "target_width": target_width, "target_height": target_height}]] + +def interpolate_prompts_SDXL(animation_promptsG, animation_promptsL, max_frames, current_frame, clip, app_text_G, app_text_L, pre_text_G, pre_text_L, pw_a, pw_b, pw_c, pw_d, width, height, crop_w, crop_h, target_width, target_height, print_output): #parse the conditioning strength and determine in-betweens. + #Get prompts sorted by keyframe + max_f = max_frames #needed for numexpr even though it doesn't look like it's in use. 
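+    #Note: the G (clip_g) and L (clip_l) schedules below are parsed and interpolated
+    #independently and only paired up again inside SDXLencode, so the two prompt
+    #dicts are normally expected to share the same keyframe numbers.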
+ parsed_animation_promptsG = {} + parsed_animation_promptsL = {} + for key, value in animation_promptsG.items(): + if check_is_number(key): #default case 0:(1 + t %5), 30:(5-t%2) + parsed_animation_promptsG[key] = value + else: #math on the left hand side case 0:(1 + t %5), maxKeyframes/2:(5-t%2) + parsed_animation_promptsG[int(numexpr.evaluate(key))] = value + + sorted_prompts_G = sorted(parsed_animation_promptsG.items(), key=lambda item: int(item[0])) + + for key, value in animation_promptsL.items(): + if check_is_number(key): #default case 0:(1 + t %5), 30:(5-t%2) + parsed_animation_promptsL[key] = value + else: #math on the left hand side case 0:(1 + t %5), maxKeyframes/2:(5-t%2) + parsed_animation_promptsL[int(numexpr.evaluate(key))] = value + + sorted_prompts_L = sorted(parsed_animation_promptsL.items(), key=lambda item: int(item[0])) + + #Setup containers for interpolated prompts + cur_prompt_series_G = pd.Series([np.nan for a in range(max_frames)]) + nxt_prompt_series_G = pd.Series([np.nan for a in range(max_frames)]) + + cur_prompt_series_L = pd.Series([np.nan for a in range(max_frames)]) + nxt_prompt_series_L = pd.Series([np.nan for a in range(max_frames)]) + + #simple array for strength values + weight_series = [np.nan] * max_frames + + #in case there is only one keyed promt, set all prompts to that prompt + if len(sorted_prompts_G) - 1 == 0: + for i in range(0, len(cur_prompt_series_G)-1): + current_prompt_G = sorted_prompts_G[0][1] + cur_prompt_series_G[i] = str(pre_text_G) + " " + str(current_prompt_G) + " " + str(app_text_G) + nxt_prompt_series_G[i] = str(pre_text_G) + " " + str(current_prompt_G) + " " + str(app_text_G) + + if len(sorted_prompts_L) - 1 == 0: + for i in range(0, len(cur_prompt_series_L)-1): + current_prompt_L = sorted_prompts_L[0][1] + cur_prompt_series_L[i] = str(pre_text_L) + " " + str(current_prompt_L) + " " + str(app_text_L) + nxt_prompt_series_L[i] = str(pre_text_L) + " " + str(current_prompt_L) + " " + str(app_text_L) + + + + #Initialized outside of loop for nan check + current_key = 0 + next_key = 0 + + # For every keyframe prompt except the last + for i in range(0, len(sorted_prompts_G) - 1): + # Get current and next keyframe + current_key = int(sorted_prompts_G[i][0]) + next_key = int(sorted_prompts_G[i + 1][0]) + + # Ensure there's no weird ordering issues or duplication in the animation prompts + # (unlikely because we sort above, and the json parser will strip dupes) + if current_key >= next_key: + print(f"WARNING: Sequential prompt keyframes {i}:{current_key} and {i + 1}:{next_key} are not monotonously increasing; skipping interpolation.") + continue + + # Get current and next keyframes' positive and negative prompts (if any) + current_prompt_G = sorted_prompts_G[i][1] + next_prompt_G = sorted_prompts_G[i + 1][1] + + # Calculate how much to shift the weight from current to next prompt at each frame. + weight_step = 1 / (next_key - current_key) + + for f in range(current_key, next_key): + next_weight = weight_step * (f - current_key) + current_weight = 1 - next_weight + + #add the appropriate prompts and weights to their respective containers. 
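+            #e.g. with keyframes 0 and 10, frame 5 gives next_weight = 0.5 and
+            #current_weight = 0.5, an even blend of the two keyframe prompts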
+ cur_prompt_series_G[f] = '' + nxt_prompt_series_G[f] = '' + weight_series[f] = 0.0 + + cur_prompt_series_G[f] += (str(pre_text_G) + " " + str(current_prompt_G) + " " + str(app_text_G)) + nxt_prompt_series_G[f] += (str(pre_text_G) + " " + str(next_prompt_G) + " " + str(app_text_G)) + + weight_series[f] += current_weight + + current_key = next_key + next_key = max_frames + current_weight = 0.0 + #second loop to catch any nan runoff + for f in range(current_key, next_key): + next_weight = weight_step * (f - current_key) + + #add the appropriate prompts and weights to their respective containers. + cur_prompt_series_G[f] = '' + nxt_prompt_series_G[f] = '' + weight_series[f] = current_weight + + cur_prompt_series_G[f] += (str(pre_text_G) + " " + str(current_prompt_G) + " " + str(app_text_G)) + nxt_prompt_series_G[f] += (str(pre_text_G) + " " + str(next_prompt_G) + " " + str(app_text_G)) + + + #Reset outside of loop for nan check + current_key = 0 + next_key = 0 + + # For every keyframe prompt except the last + for i in range(0, len(sorted_prompts_L) - 1): + # Get current and next keyframe + current_key = int(sorted_prompts_L[i][0]) + next_key = int(sorted_prompts_L[i + 1][0]) + + # Ensure there's no weird ordering issues or duplication in the animation prompts + # (unlikely because we sort above, and the json parser will strip dupes) + if current_key >= next_key: + print(f"WARNING: Sequential prompt keyframes {i}:{current_key} and {i + 1}:{next_key} are not monotonously increasing; skipping interpolation.") + continue + + # Get current and next keyframes' positive and negative prompts (if any) + current_prompt_L = sorted_prompts_L[i][1] + next_prompt_L = sorted_prompts_L[i + 1][1] + + # Calculate how much to shift the weight from current to next prompt at each frame. + weight_step = 1 / (next_key - current_key) + + for f in range(current_key, next_key): + next_weight = weight_step * (f - current_key) + current_weight = 1 - next_weight + + #add the appropriate prompts and weights to their respective containers. + cur_prompt_series_L[f] = '' + nxt_prompt_series_L[f] = '' + weight_series[f] = 0.0 + + cur_prompt_series_L[f] += (str(pre_text_L) + " " + str(current_prompt_L) + " " + str(app_text_L)) + nxt_prompt_series_L[f] += (str(pre_text_L) + " " + str(next_prompt_L) + " " + str(app_text_L)) + + weight_series[f] += current_weight + + current_key = next_key + next_key = max_frames + current_weight = 0.0 + #second loop to catch any nan runoff + for f in range(current_key, next_key): + next_weight = weight_step * (f - current_key) + + #add the appropriate prompts and weights to their respective containers. 
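+        #note: next_weight computed above is unused in this runoff pass; the weight
+        #stays pinned at current_weight (0.0) so the final keyframe's prompt wins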
+        cur_prompt_series_L[f] = ''
+        nxt_prompt_series_L[f] = ''
+        weight_series[f] = current_weight
+
+        cur_prompt_series_L[f] += (str(pre_text_L) + " " + str(current_prompt_L) + " " + str(app_text_L))
+        nxt_prompt_series_L[f] += (str(pre_text_L) + " " + str(next_prompt_L) + " " + str(app_text_L))
+
+    #Evaluate the current and next prompt's expressions
+    cur_prompt_series_G[current_frame] = prepare_prompt(cur_prompt_series_G[current_frame], max_frames, current_frame, pw_a, pw_b, pw_c, pw_d)
+    nxt_prompt_series_G[current_frame] = prepare_prompt(nxt_prompt_series_G[current_frame], max_frames, current_frame, pw_a, pw_b, pw_c, pw_d)
+    cur_prompt_series_L[current_frame] = prepare_prompt(cur_prompt_series_L[current_frame], max_frames, current_frame, pw_a, pw_b, pw_c, pw_d)
+    nxt_prompt_series_L[current_frame] = prepare_prompt(nxt_prompt_series_L[current_frame], max_frames, current_frame, pw_a, pw_b, pw_c, pw_d)
+    if print_output:
+        #Show the to/from prompts with evaluated expressions for transparency.
+        print("\n", "G_Clip:", "\n", "Max Frames: ", max_frames, "\n", "Current Prompt: ", cur_prompt_series_G[current_frame], "\n", "Next Prompt: ", nxt_prompt_series_G[current_frame], "\n", "Strength : ", weight_series[current_frame], "\n")
+
+        print("\n", "L_Clip:", "\n", "Max Frames: ", max_frames, "\n", "Current Prompt: ", cur_prompt_series_L[current_frame], "\n", "Next Prompt: ", nxt_prompt_series_L[current_frame], "\n", "Strength : ", weight_series[current_frame], "\n")
+
+    #Output methods depending if the prompts are the same or if the current frame is a keyframe.
+    #if it is an in-between frame and the prompts differ, composable diffusion will be performed.
+    current_cond = SDXLencode(clip, width, height, crop_w, crop_h, target_width, target_height, cur_prompt_series_G[current_frame], cur_prompt_series_L[current_frame])
+
+    if str(cur_prompt_series_G[current_frame]) == str(nxt_prompt_series_G[current_frame]) and str(cur_prompt_series_L[current_frame]) == str(nxt_prompt_series_L[current_frame]):
+        return current_cond
+
+    if weight_series[current_frame] == 1:
+        return current_cond
+
+    #encode the next keyframe's prompts (the nxt series) so in-between frames can blend toward them
+    next_cond = SDXLencode(clip, width, height, crop_w, crop_h, target_width, target_height, nxt_prompt_series_G[current_frame], nxt_prompt_series_L[current_frame])
+
+    if weight_series[current_frame] == 0:
+        return next_cond
+
+    return addWeighted(current_cond, next_cond, weight_series[current_frame])
\ No newline at end of file
diff --git a/custom_nodes/ComfyUI_FizzNodes/ScheduledNodes.py b/custom_nodes/ComfyUI_FizzNodes/ScheduledNodes.py
new file mode 100644
index 0000000000000000000000000000000000000000..34ee501fb67be6f3ed909710b77704eaeeb5660a
--- /dev/null
+++ b/custom_nodes/ComfyUI_FizzNodes/ScheduledNodes.py
@@ -0,0 +1,651 @@
+#These nodes were made using code from the Deforum extension for A1111 webui
+#You can find the project here: https://github.com/deforum-art/sd-webui-deforum
+import comfy
+import numexpr
+import torch
+import numpy as np
+import pandas as pd
+import re
+import json
+
+
+from .ScheduleFuncs import (
+    check_is_number, interpolate_prompts_SDXL, PoolAnimConditioning,
+    interpolate_string, addWeighted, reverseConcatenation, split_weighted_subprompts
+)
+from .BatchFuncs import interpolate_prompt_series, BatchPoolAnimConditioning, BatchInterpolatePromptsSDXL, batch_split_weighted_subprompts #, BatchGLIGENConditioning
+from 
.ValueFuncs import batch_get_inbetweens, batch_parse_key_frames, parse_key_frames, get_inbetweens, sanitize_value +#Max resolution value for Gligen area calculation. +MAX_RESOLUTION=8192 + +defaultPrompt=""""0" :"", +"12" :"", +"24" :"", +"36" :"", +"48" :"", +"60" :"", +"72" :"", +"84" :"", +"96" :"", +"108" :"", +"120" :"" +""" + +defaultValue="""0:(0), +12:(0), +24:(0), +36:(0), +48:(0), +60:(0), +72:(0), +84:(0), +96:(0), +108:(0), +120:(0) +""" + +#This node parses the user's formatted prompt, +#sequences the current prompt,next prompt, and +#conditioning strength, evaluates expressions in +#the prompts, and then returns either current, +#next or averaged conditioning. +class PromptSchedule: + @classmethod + def INPUT_TYPES(s): + return {"required": {"text": ("STRING", {"multiline": True, "default":defaultPrompt}), + "clip": ("CLIP", ), + "max_frames": ("INT", {"default": 120.0, "min": 1.0, "max": 999999.0, "step": 1.0}), + "current_frame": ("INT", {"default": 0.0, "min": 0.0, "max": 999999.0, "step": 1.0,}), + "print_output":("BOOLEAN", {"default": False}),},# "forceInput": True}),}, + "optional": {"pre_text": ("STRING", {"multiline": True,}),# "forceInput": True}), + "app_text": ("STRING", {"multiline": True,}),# "forceInput": True}), + "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + }} + + RETURN_TYPES = ("CONDITIONING", "CONDITIONING",) + RETURN_NAMES = ("POS", "NEG",) + FUNCTION = "animate" + CATEGORY = "FizzNodes 📅🅕🅝/ScheduleNodes" + + def animate(self, text, max_frames, print_output, current_frame, clip, pw_a=0, pw_b=0, pw_c=0, pw_d=0, pre_text='', app_text=''): + current_frame = current_frame % max_frames + inputText = str("{" + text + "}") + inputText = re.sub(r',\s*}', '}', inputText) + animation_prompts = json.loads(inputText.strip()) + start_frame = 0 + pos, neg = batch_split_weighted_subprompts(animation_prompts, pre_text, app_text) + + pos_cur_prompt, pos_nxt_prompt, weight = interpolate_prompt_series(pos, max_frames, start_frame, pre_text, app_text, pw_a, + pw_b, pw_c, pw_d, print_output) + pc = PoolAnimConditioning(pos_cur_prompt[current_frame], pos_nxt_prompt[current_frame], weight[current_frame], clip) + + neg_cur_prompt, neg_nxt_prompt, weight = interpolate_prompt_series(neg, max_frames, start_frame, pre_text, app_text, pw_a, + pw_b, pw_c, pw_d, print_output) + nc = PoolAnimConditioning(neg_cur_prompt[current_frame], neg_nxt_prompt[current_frame], weight[current_frame], clip) + + return (pc, nc,) + +class BatchPromptSchedule: + @classmethod + def INPUT_TYPES(s): + return {"required": {"text": ("STRING", {"multiline": True, "default": defaultPrompt}), + "clip": ("CLIP",), + "max_frames": ("INT", {"default": 120.0, "min": 1.0, "max": 999999.0, "step": 1.0}), + "print_output":("BOOLEAN", {"default": False}),}, + # "forceInput": True}),}, + "optional": {"pre_text": ("STRING", {"multiline": True}), # "forceInput": True}), + "app_text": ("STRING", {"multiline": True}), # "forceInput": True}), + "start_frame": ("INT", {"default": 0, "min": 0, "max": 9999, "step": 1, }), + "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_b": 
("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + }} + + RETURN_TYPES = ("CONDITIONING", "CONDITIONING",) + RETURN_NAMES = ("POS", "NEG",) + FUNCTION = "animate" + + CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes" + + def animate(self, text, max_frames, print_output, clip, start_frame, pw_a, pw_b, pw_c, pw_d, pre_text='', app_text=''): + inputText = str("{" + text + "}") + inputText = re.sub(r',\s*}', '}', inputText) + max_frames += start_frame + animation_prompts = json.loads(inputText.strip()) + pos, neg = batch_split_weighted_subprompts(animation_prompts, pre_text, app_text) + + pos_cur_prompt, pos_nxt_prompt, weight = interpolate_prompt_series(pos, max_frames, start_frame, pre_text, app_text, pw_a, pw_b, pw_c, pw_d, print_output) + pc = BatchPoolAnimConditioning( pos_cur_prompt, pos_nxt_prompt, weight, clip,) + + neg_cur_prompt, neg_nxt_prompt, weight = interpolate_prompt_series(neg, max_frames, start_frame, pre_text, app_text, pw_a, pw_b, pw_c, pw_d, print_output) + nc = BatchPoolAnimConditioning(neg_cur_prompt, neg_nxt_prompt, weight, clip, ) + + return (pc, nc, ) + +class BatchPromptScheduleLatentInput: + @classmethod + def INPUT_TYPES(s): + return {"required": {"text": ("STRING", {"multiline": True, "default": defaultPrompt}), + "clip": ("CLIP",), + "num_latents": ("LATENT", ), + "print_output":("BOOLEAN", {"default": False}),}, + # "forceInput": True}),}, + "optional": {"pre_text": ("STRING", {"multiline": True, }), # "forceInput": True}), + "app_text": ("STRING", {"multiline": True, }), # "forceInput": True}), + "start_frame": ("INT", {"default": 0.0, "min": 0, "max": 9999, "step": 1, }), + "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + }} + + RETURN_TYPES = ("CONDITIONING", "CONDITIONING", "LATENT", ) + RETURN_NAMES = ("POS", "NEG", "INPUT_LATENTS",) + FUNCTION = "animate" + + CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes" + + def animate(self, text, num_latents, print_output, clip, start_frame, pw_a, pw_b, pw_c, pw_d, pre_text='', app_text=''): + max_frames = sum(tensor.size(0) for tensor in num_latents.values()) + max_frames += start_frame + inputText = str("{" + text + "}") + inputText = re.sub(r',\s*}', '}', inputText) + + animation_prompts = json.loads(inputText.strip()) + pos, neg = batch_split_weighted_subprompts(animation_prompts, pre_text, app_text) + + pos_cur_prompt, pos_nxt_prompt, weight = interpolate_prompt_series(pos, max_frames, start_frame, pre_text, + app_text, pw_a, pw_b, pw_c, pw_d, + print_output) + pc = BatchPoolAnimConditioning(pos_cur_prompt, pos_nxt_prompt, weight, clip, ) + + neg_cur_prompt, neg_nxt_prompt, weight = interpolate_prompt_series(neg, max_frames, start_frame, pre_text, + app_text, pw_a, pw_b, pw_c, pw_d, + print_output) + nc = BatchPoolAnimConditioning(neg_cur_prompt, neg_nxt_prompt, weight, clip, ) + + return (pc, nc, num_latents,) +class StringSchedule: + @classmethod + def 
INPUT_TYPES(s): + return {"required": {"text": ("STRING", {"multiline": True, "default": defaultPrompt}), + "max_frames": ("INT", {"default": 120.0, "min": 1.0, "max": 999999.0, "step": 1.0}), + "current_frame": ("INT", {"default": 0.0, "min": 0.0, "max": 999999.0, "step": 1.0, })}, + # "forceInput": True}),}, + "optional": {"pre_text": ("STRING", {"multiline": True, }), # "forceInput": True}), + "app_text": ("STRING", {"multiline": True, }), # "forceInput": True}), + "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + }} + + RETURN_TYPES = ("STRING",) + FUNCTION = "animate" + + CATEGORY = "FizzNodes 📅🅕🅝/ScheduleNodes" + + def animate(self, text, max_frames, current_frame, pw_a=0, pw_b=0, pw_c=0, pw_d=0, pre_text='', app_text=''): + current_frame = current_frame % max_frames + inputText = str("{" + text + "}") + inputText = re.sub(r',\s*}', '}', inputText) + animation_prompts = json.loads(inputText.strip()) + cur_prompt = interpolate_string(animation_prompts, max_frames, current_frame, pre_text, + app_text, pw_a, pw_b, pw_c, pw_d) + #c = PoolAnimConditioning(cur_prompt, nxt_prompt, weight, clip, ) + return (cur_prompt,) + +class PromptScheduleSDXLRefiner: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "ascore": ("FLOAT", {"default": 6.0, "min": 0.0, "max": 1000.0, "step": 0.01}), + "width": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "height": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "text": ("STRING", {"multiline": True, "default":defaultPrompt}), "clip": ("CLIP", ), + }} + RETURN_TYPES = ("CONDITIONING",) + FUNCTION = "encode" + + CATEGORY = "advanced/conditioning" + + def encode(self, clip, ascore, width, height, text): + tokens = clip.tokenize(text) + cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True) + return ([[cond, {"pooled_output": pooled, "aesthetic_score": ascore, "width": width,"height": height}]], ) + +class BatchStringSchedule: + @classmethod + def INPUT_TYPES(s): + return {"required": {"text": ("STRING", {"multiline": True, "default": defaultPrompt}), + "max_frames": ("INT", {"default": 120.0, "min": 1.0, "max": 999999.0, "step": 1.0}),}, + # "forceInput": True}),}, + "optional": {"pre_text": ("STRING", {"multiline": True, }), # "forceInput": True}), + "app_text": ("STRING", {"multiline": True, }), # "forceInput": True}), + "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }), + # "forceInput": True }), + }} + + RETURN_TYPES = ("STRING",) + FUNCTION = "animate" + + CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes" + + def animate(self, text, max_frames, pw_a=0, pw_b=0, pw_c=0, pw_d=0, pre_text='', app_text=''): + inputText = str("{" + text + "}") + inputText = re.sub(r',\s*}', '}', inputText) + start_frame = 0 + animation_prompts = 
json.loads(inputText.strip()) + cur_prompt_series, nxt_prompt_series, weight_series = interpolate_prompt_series(animation_prompts, max_frames, start_frame, pre_text, + app_text, pw_a, pw_b, pw_c, pw_d) + return (cur_prompt_series,) + +class BatchPromptScheduleEncodeSDXL: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "width": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "height": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "crop_w": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION}), + "crop_h": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION}), + "target_width": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "target_height": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "text_g": ("STRING", {"multiline": True, }), "clip": ("CLIP", ), + "text_l": ("STRING", {"multiline": True, }), "clip": ("CLIP", ), + "max_frames": ("INT", {"default": 120.0, "min": 1.0, "max": 999999.0, "step": 1.0}), + "print_output":("BOOLEAN", {"default": False}),}, + "optional": {"pre_text_G": ("STRING", {"multiline": True, }),# "forceInput": True}), + "app_text_G": ("STRING", {"multiline": True, }),# "forceInput": True}), + "pre_text_L": ("STRING", {"multiline": True, }),# "forceInput": True}), + "app_text_L": ("STRING", {"multiline": True, }),# "forceInput": True}), + "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + }} + RETURN_TYPES = ("CONDITIONING",) + FUNCTION = "animate" + + CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes" + + def animate(self, clip, width, height, crop_w, crop_h, target_width, target_height, text_g, text_l, app_text_G, app_text_L, pre_text_G, pre_text_L, max_frames, print_output, pw_a, pw_b, pw_c, pw_d): + inputTextG = str("{" + text_g + "}") + inputTextL = str("{" + text_l + "}") + inputTextG = re.sub(r',\s*}', '}', inputTextG) + inputTextL = re.sub(r',\s*}', '}', inputTextL) + animation_promptsG = json.loads(inputTextG.strip()) + animation_promptsL = json.loads(inputTextL.strip()) + return (BatchInterpolatePromptsSDXL(animation_promptsG, animation_promptsL, max_frames, clip, app_text_G, app_text_L, pre_text_G, pre_text_L, pw_a, pw_b, pw_c, pw_d, width, height, crop_w, crop_h, target_width, target_height, print_output,),) + +class BatchPromptScheduleEncodeSDXLLatentInput: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "width": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "height": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "crop_w": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION}), + "crop_h": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION}), + "target_width": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "target_height": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "text_g": ("STRING", {"multiline": True, }), "clip": ("CLIP", ), + "text_l": ("STRING", {"multiline": True, }), "clip": ("CLIP", ), + "num_latents": ("LATENT", ), + "print_output":("BOOLEAN", {"default": False}),}, + "optional": {"pre_text_G": ("STRING", {"multiline": True, }),# "forceInput": True}), + "app_text_G": ("STRING", {"multiline": 
True, }),# "forceInput": True}), + "pre_text_L": ("STRING", {"multiline": True, }),# "forceInput": True}), + "app_text_L": ("STRING", {"multiline": True, }),# "forceInput": True}), + "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + }} + RETURN_TYPES = ("CONDITIONING", "LATENT",) + FUNCTION = "animate" + + CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes" + + def animate(self, clip, width, height, crop_w, crop_h, target_width, target_height, text_g, text_l, app_text_G, app_text_L, pre_text_G, pre_text_L, num_latents, print_output, pw_a, pw_b, pw_c, pw_d): + max_frames = sum(tensor.size(0) for tensor in num_latents.values()) + inputTextG = str("{" + text_g + "}") + inputTextL = str("{" + text_l + "}") + inputTextG = re.sub(r',\s*}', '}', inputTextG) + inputTextL = re.sub(r',\s*}', '}', inputTextL) + animation_promptsG = json.loads(inputTextG.strip()) + animation_promptsL = json.loads(inputTextL.strip()) + return (BatchInterpolatePromptsSDXL(animation_promptsG, animation_promptsL, max_frames, clip, app_text_G, app_text_L, pre_text_G, pre_text_L, pw_a, pw_b, pw_c, pw_d, width, height, crop_w, crop_h, target_width, target_height, print_output, ), num_latents, ) + +class PromptScheduleEncodeSDXL: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "width": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "height": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "crop_w": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION}), + "crop_h": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION}), + "target_width": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "target_height": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "text_g": ("STRING", {"multiline": True, }), "clip": ("CLIP", ), + "text_l": ("STRING", {"multiline": True, }), "clip": ("CLIP", ), + "max_frames": ("INT", {"default": 120.0, "min": 1.0, "max": 999999.0, "step": 1.0}), + "current_frame": ("INT", {"default": 0.0, "min": 0.0, "max": 999999.0, "step": 1.0}), + "print_output":("BOOLEAN", {"default": False})}, + "optional": {"pre_text_G": ("STRING", {"multiline": True, }),# "forceInput": True}), + "app_text_G": ("STRING", {"multiline": True, }),# "forceInput": True}), + "pre_text_L": ("STRING", {"multiline": True, }),# "forceInput": True}), + "app_text_L": ("STRING", {"multiline": True, }),# "forceInput": True}), + "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}), #"forceInput": True }), + }} + RETURN_TYPES = ("CONDITIONING",) + FUNCTION = "animate" + + CATEGORY = "FizzNodes 📅🅕🅝/ScheduleNodes" + + def animate(self, clip, width, height, crop_w, crop_h, target_width, target_height, text_g, text_l, app_text_G, app_text_L, pre_text_G, pre_text_L, max_frames, current_frame, print_output, pw_a, pw_b, pw_c, pw_d): + current_frame = 
current_frame % max_frames + inputTextG = str("{" + text_g + "}") + inputTextL = str("{" + text_l + "}") + inputTextG = re.sub(r',\s*}', '}', inputTextG) + inputTextL = re.sub(r',\s*}', '}', inputTextL) + animation_promptsG = json.loads(inputTextG.strip()) + animation_promptsL = json.loads(inputTextL.strip()) + return (interpolate_prompts_SDXL(animation_promptsG, animation_promptsL, max_frames, current_frame, clip, app_text_G, app_text_L, pre_text_G, pre_text_L, pw_a, pw_b, pw_c, pw_d, width, height, crop_w, crop_h, target_width, target_height, print_output,),) + +# This node schedules the prompt using separate nodes as the keyframes. +# The values in the prompt are evaluated in NodeFlowEnd. +class PromptScheduleNodeFlow: + @classmethod + def INPUT_TYPES(s): + return {"required": {"text": ("STRING", {"multiline": True}), + "num_frames": ("INT", {"default": 24.0, "min": 0.0, "max": 9999.0, "step": 1.0}),}, + "optional": {"in_text": ("STRING", {"multiline": False, }), # "forceInput": True}), + "max_frames": ("INT", {"default": 0.0, "min": 0.0, "max": 999999.0, "step": 1.0,})}} # "forceInput": True}),}} + + RETURN_TYPES = ("INT","STRING",) + FUNCTION = "addString" + CATEGORY = "FizzNodes 📅🅕🅝/ScheduleNodes" + + def addString(self, text, in_text='', max_frames=0, num_frames=0): + if in_text: + # Remove trailing comma from in_text if it exists + in_text = in_text.rstrip(',') + + new_max = num_frames + max_frames + + if max_frames == 0: + # Construct a new JSON object with a single key-value pair + new_text = in_text + (', ' if in_text else '') + f'"{max_frames}": "{text}"' + else: + # Construct a new JSON object with a single key-value pair + new_text = in_text + (', ' if in_text else '') + f'"{new_max}": "{text}"' + + + + return (new_max, new_text,) + + +#Last node in the Node Flow for evaluating the json produced by the above node. 
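+#It rejoins the keyframe strings into the same JSON-style schedule the other schedule
+#nodes consume (e.g. "0": "a cat", "24": "a dog") and evaluates it for the requested
+#current_frame.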
+class PromptScheduleNodeFlowEnd:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"text": ("STRING", {"multiline": False, "forceInput": True}),
+                             "clip": ("CLIP", ),
+                             "max_frames": ("INT", {"default": 0.0, "min": 0.0, "max": 999999.0, "step": 1.0,}),
+                             "print_output": ("BOOLEAN", {"default": False}),
+                             "current_frame": ("INT", {"default": 0.0, "min": 0.0, "max": 999999.0, "step": 1.0,}),}, #"forceInput": True}),},
+                "optional": {"pre_text": ("STRING", {"multiline": True, }),#"forceInput": True}),
+                             "app_text": ("STRING", {"multiline": True, }),#"forceInput": True}),
+                             "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}),# "forceInput": True}),
+                             "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}),# "forceInput": True}),
+                             "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}),# "forceInput": True}),
+                             "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}),# "forceInput": True}),
+                             }}
+    RETURN_TYPES = ("CONDITIONING","CONDITIONING",)
+    RETURN_NAMES = ("POS", "NEG",)
+    FUNCTION = "animate"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/ScheduleNodes"
+
+    def animate(self, text, max_frames, print_output, current_frame, clip, pw_a = 0, pw_b = 0, pw_c = 0, pw_d = 0, pre_text = '', app_text = ''):
+        current_frame = current_frame % max_frames
+        #strip a leading/trailing comma so the joined string parses as valid JSON below
+        if text[-1] == ",":
+            text = text[:-1]
+        if text[0] == ",":
+            text = text[1:]
+        start_frame = 0
+        inputText = str("{" + text + "}")
+        inputText = re.sub(r',\s*}', '}', inputText)
+        animation_prompts = json.loads(inputText.strip())
+        max_frames += start_frame
+        pos, neg = batch_split_weighted_subprompts(animation_prompts, pre_text, app_text)
+
+        pos_cur_prompt, pos_nxt_prompt, weight = interpolate_prompt_series(pos, max_frames, start_frame, pre_text, app_text, pw_a,
+                                                                           pw_b, pw_c, pw_d, print_output)
+        pc = PoolAnimConditioning(pos_cur_prompt[current_frame], pos_nxt_prompt[current_frame], weight[current_frame],
+                                  clip, )
+
+        neg_cur_prompt, neg_nxt_prompt, weight = interpolate_prompt_series(neg, max_frames, start_frame, pre_text, app_text, pw_a,
+                                                                           pw_b, pw_c, pw_d, print_output)
+        nc = PoolAnimConditioning(neg_cur_prompt[current_frame], neg_nxt_prompt[current_frame], weight[current_frame],
+                                  clip, )
+
+        return (pc, nc,)
+
+class BatchPromptScheduleNodeFlowEnd:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"text": ("STRING", {"multiline": False, "forceInput": True}),
+                             "clip": ("CLIP", ),
+                             "max_frames": ("INT", {"default": 0.0, "min": 0.0, "max": 999999.0, "step": 1.0,}),
+                             "print_output": ("BOOLEAN", {"default": False}),
+                             },
+                "optional": {"pre_text": ("STRING", {"multiline": False, }),#"forceInput": True}),
+                             "app_text": ("STRING", {"multiline": False, }),#"forceInput": True}),
+                             "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}),# "forceInput": True}),
+                             "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}),# "forceInput": True}),
+                             "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}),# "forceInput": True}),
+                             "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1,}),# "forceInput": True}),
+                             }}
+    RETURN_TYPES = ("CONDITIONING", "CONDITIONING",)
+    RETURN_NAMES = ("POS", "NEG",)
+
+    FUNCTION = "animate"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes"
+
+    def animate(self, text, max_frames, print_output, clip, start_frame=0, pw_a=0, pw_b=0, pw_c=0, pw_d=0, pre_text='', app_text=''):
+        #strip a leading/trailing comma so the joined string parses as valid JSON below
+        if text[-1] == ",":
+            text = text[:-1]
+        if text[0] == ",":
+            text = text[1:]
+        inputText = str("{" + text + "}")
+        inputText = re.sub(r',\s*}', '}', inputText)
+        animation_prompts = json.loads(inputText.strip())
+
+        max_frames += start_frame
+
+        pos, neg = batch_split_weighted_subprompts(animation_prompts, pre_text, app_text)
+
+        pos_cur_prompt, pos_nxt_prompt, weight = interpolate_prompt_series(pos, max_frames, start_frame, pre_text, app_text, pw_a,
+                                                                           pw_b, pw_c, pw_d, print_output)
+        pc = BatchPoolAnimConditioning(pos_cur_prompt[current_frame], pos_nxt_prompt[current_frame], weight[current_frame],
+                                       clip, )
+
+        neg_cur_prompt, neg_nxt_prompt, weight = interpolate_prompt_series(neg, max_frames, start_frame, pre_text, app_text, pw_a,
+                                                                           pw_b, pw_c, pw_d, print_output)
+        nc = BatchPoolAnimConditioning(neg_cur_prompt[current_frame], neg_nxt_prompt[current_frame], weight[current_frame],
+                                       clip, )
+
+        return (pc, nc,)
+
+class BatchGLIGENSchedule:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"conditioning_to": ("CONDITIONING",),
+                             "clip": ("CLIP",),
+                             "gligen_textbox_model": ("GLIGEN",),
+                             "text": ("STRING", {"multiline": True, "default": defaultPrompt}),
+                             "width": ("INT", {"default": 64, "min": 8, "max": MAX_RESOLUTION, "step": 8}),
+                             "height": ("INT", {"default": 64, "min": 8, "max": MAX_RESOLUTION, "step": 8}),
+                             "x": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION, "step": 8}),
+                             "y": ("INT", {"default": 0, "min": 0, "max": MAX_RESOLUTION, "step": 8}),
+                             "max_frames": ("INT", {"default": 120, "min": 1, "max": 999999, "step": 1}),
+                             "print_output": ("BOOLEAN", {"default": False})},
+               "optional": {"pre_text": ("STRING", {"multiline": True, }),
+                            "app_text": ("STRING", {"multiline": True, }),
+                            "pw_a": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }),
+                            "pw_b": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }),
+                            "pw_c": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }),
+                            "pw_d": ("FLOAT", {"default": 0.0, "min": -9999.0, "max": 9999.0, "step": 0.1, }),
+                            }}
+
+    RETURN_TYPES = ("CONDITIONING",)
+    FUNCTION = "animate"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes"
+
+    def animate(self, conditioning_to, clip, gligen_textbox_model, text, width, height, x, y, max_frames, print_output, pw_a=0, pw_b=0, pw_c=0, pw_d=0, pre_text='', app_text=''):
+        inputText = str("{" + text + "}")
+        inputText = re.sub(r',\s*}', '}', inputText)
+        animation_prompts = json.loads(inputText.strip())
+
+        start_frame = 0
+        cur_series, nxt_series, weight_series = interpolate_prompt_series(animation_prompts, max_frames, start_frame, pre_text, app_text, pw_a, pw_b, pw_c, pw_d, print_output)
+        out = []
+        for i in range(0, max_frames - 1):
+            # Placeholder motion: slide the GLIGEN box 8 pixels to the right per frame.
+            x_change = 8
+            y_change = 0
+
+            # Update x and y values
+            x += x_change
+            y += y_change
+            out.append(self.append(conditioning_to, clip, gligen_textbox_model, pre_text, width, height, x, y))
+
+        return (out,)
+
+    def append(self, conditioning_to, clip, gligen_textbox_model, text, width, height, x, y):
+        c = []
+        cond, cond_pooled = clip.encode_from_tokens(clip.tokenize(text), return_pooled=True)
+        for t in range(0, len(conditioning_to)):
+            n = [conditioning_to[t][0], conditioning_to[t][1].copy()]
+            position_params = [(cond_pooled, height // 8, width // 8, y // 8, x // 8)]
+            prev = []
+            if "gligen" in n[1]:
+                prev = n[1]['gligen'][2]
+
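+            # Append this frame's textbox to any GLIGEN boxes already attached
+            # to the conditioning; // 8 maps pixel coordinates to latent cells.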
+            n[1]['gligen'] = ("position", gligen_textbox_model, prev + position_params)
+            c.append(n)
+        return c
+
+#This node parses the user's text input into
+#interpolated floats. Expressions can be input
+#and evaluated.
+class ValueSchedule:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"text": ("STRING", {"multiline": True, "default": defaultValue}),
+                             "max_frames": ("INT", {"default": 120, "min": 1, "max": 999999, "step": 1}),
+                             "current_frame": ("INT", {"default": 0, "min": 0, "max": 999999, "step": 1,}),
+                             "print_output": ("BOOLEAN", {"default": False})}}
+    RETURN_TYPES = ("FLOAT", "INT")
+    FUNCTION = "animate"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/ScheduleNodes"
+
+    def animate(self, text, max_frames, current_frame, print_output):
+        current_frame = current_frame % max_frames
+        t = get_inbetweens(parse_key_frames(text, max_frames), max_frames)
+        if print_output is True:
+            print("ValueSchedule: ", t[current_frame], "\n", "current_frame: ", current_frame)
+        return (t[current_frame], int(t[current_frame]),)
+
+class BatchValueSchedule:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"text": ("STRING", {"multiline": True, "default": defaultValue}),
+                             "max_frames": ("INT", {"default": 120, "min": 1, "max": 999999, "step": 1}),
+                             "print_output": ("BOOLEAN", {"default": False})}}
+
+    RETURN_TYPES = ("FLOAT", "INT")
+    FUNCTION = "animate"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes"
+
+    def animate(self, text, max_frames, print_output):
+        t = batch_get_inbetweens(batch_parse_key_frames(text, max_frames), max_frames)
+        if print_output is True:
+            print("ValueSchedule: ", t)
+        return (t, list(map(int, t)),)
+
+class BatchValueScheduleLatentInput:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"text": ("STRING", {"multiline": True, "default": defaultValue}),
+                             "num_latents": ("LATENT", ),
+                             "print_output": ("BOOLEAN", {"default": False})}}
+
+    RETURN_TYPES = ("FLOAT", "INT", "LATENT", )
+    FUNCTION = "animate"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes"
+
+    def animate(self, text, num_latents, print_output):
+        # the schedule length is taken from the number of latents passed in
+        num_elements = sum(tensor.size(0) for tensor in num_latents.values())
+        max_frames = num_elements
+        t = batch_get_inbetweens(batch_parse_key_frames(text, max_frames), max_frames)
+        if print_output is True:
+            print("ValueSchedule: ", t)
+        return (t, list(map(int, t)), num_latents, )
+
+# Expects a Batch Value Schedule list input; exports an image batch whose frames
+# are picked from the input image batch according to the normalized values.
+class ImageBatchFromValueSchedule:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {
+            "required": {
+                "images": ("IMAGE",),
+                "values": ("FLOAT", { "default": 1.0, "min": -1.0, "max": 1.0, "label": "values" }),
+            }
+        }
+
+    RETURN_TYPES = ("IMAGE",)
+    FUNCTION = "animate"
+    CATEGORY = "FizzNodes 📅🅕🅝/BatchScheduleNodes"
+
+    def animate(self, images, values):
+        # a lone float (widget value) is broadcast to one entry per input image
+        if isinstance(values, float):
+            values = [values] * images.shape[0]
+        min_value, max_value = min(values), max(values)
+        # normalize each value to an integer index into the input batch,
+        # guarding against a flat schedule (max == min)
+        span = (max_value - min_value) or 1.0
+        i = [round((x - min_value) / span * (images.shape[0] - 1)) for x in values]
+        return (images[i], )
diff --git a/custom_nodes/ComfyUI_FizzNodes/ValueFuncs.py b/custom_nodes/ComfyUI_FizzNodes/ValueFuncs.py
new file mode 100644
index 0000000000000000000000000000000000000000..026cdab2bdb466185c12718b104661760faeeba1
--- /dev/null
+++ b/custom_nodes/ComfyUI_FizzNodes/ValueFuncs.py
@@ -0,0 +1,113 @@
+import numexpr
+import torch
+import numpy as np
+import pandas as pd
+import re
+import json
+
+from .ScheduleFuncs import check_is_number
+
+
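+# The helpers below turn a keyframe string such as 0:(0.0), 60:(1.0), 120:(0.5)
+# into a per-frame pandas Series: keyed frames are parsed (numexpr expressions
+# are allowed, with t and max_f in scope) and the frames in between are filled
+# by interpolation.
+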
+def sanitize_value(value):
+    # Remove single quotes, double quotes, and parentheses
+    value = value.replace("'", "").replace('"', "").replace('(', "").replace(')', "")
+    return value
+
+
+def get_inbetweens(key_frames, max_frames, integer=False, interp_method='Linear', is_single_string=False):
+    key_frame_series = pd.Series([np.nan for a in range(max_frames)])
+    max_f = max_frames - 1  # referenced inside numexpr expressions even though it looks unused here
+    value_is_number = False
+    for i in range(0, max_frames):
+        if i in key_frames:
+            value = key_frames[i]
+            value_is_number = check_is_number(sanitize_value(value))
+            if value_is_number:  # if it's only a number, leave the rest for the default interpolation
+                key_frame_series[i] = sanitize_value(value)
+        if not value_is_number:
+            t = i  # numexpr picks t up from the local scope
+            # workaround for values formatted like 0:("I am test") // used for sampler schedules
+            key_frame_series[i] = numexpr.evaluate(value) if not is_single_string else sanitize_value(value)
+        elif is_single_string:  # take previous string value and replicate it
+            key_frame_series[i] = key_frame_series[i - 1]
+    key_frame_series = key_frame_series.astype(float) if not is_single_string else key_frame_series  # as string
+
+    if interp_method == 'Cubic' and len(key_frames.items()) <= 3:
+        interp_method = 'Quadratic'
+    if interp_method == 'Quadratic' and len(key_frames.items()) <= 2:
+        interp_method = 'Linear'
+
+    key_frame_series[0] = key_frame_series[key_frame_series.first_valid_index()]
+    key_frame_series[max_frames - 1] = key_frame_series[key_frame_series.last_valid_index()]
+    key_frame_series = key_frame_series.interpolate(method=interp_method.lower(), limit_direction='both')
+
+    if integer:
+        return key_frame_series.astype(int)
+    return key_frame_series
+
+
+def parse_key_frames(string, max_frames):
+    # Split the schedule on commas and parse each "frame: value" pair.
+    # A frame key may itself be a quoted numexpr expression, so it is
+    # evaluated whenever it is not a plain number; values are stored
+    # verbatim for get_inbetweens to interpret.
+    frames = dict()
+    for match_object in string.split(","):
+        frameParam = match_object.split(":")
+        max_f = max_frames - 1  # referenced inside numexpr expressions even though it looks unused here
+        frame = int(sanitize_value(frameParam[0])) if check_is_number(
+            sanitize_value(frameParam[0].strip())) else int(numexpr.evaluate(
+            frameParam[0].strip().replace("'", "", 1).replace('"', "", 1)[::-1].replace("'", "", 1).replace('"', "", 1)[::-1]))
+        frames[frame] = frameParam[1].strip()
+    if frames == {} and len(string) != 0:
+        raise RuntimeError('Key Frame string not correctly formatted')
+    return frames
+
+def batch_get_inbetweens(key_frames, max_frames, integer=False, interp_method='Linear', is_single_string=False):
+    key_frame_series = pd.Series([np.nan for a in range(max_frames)])
+    max_f = max_frames - 1  # referenced inside numexpr expressions even though it looks unused here
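+    # Same walk as get_inbetweens above: keyed frames are written into the
+    # series (numexpr expressions are evaluated with t and max_f in scope)
+    # and the remaining NaN gaps are interpolated afterwards.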
+    value_is_number = False
+    for i in range(0, max_frames):
+        if i in key_frames:
+            value = str(key_frames[i])  # convert to string so it can be treated as an expression
+            value_is_number = check_is_number(sanitize_value(value))
+            if value_is_number:
+                key_frame_series[i] = sanitize_value(value)
+        if not value_is_number:
+            t = i  # numexpr picks t up from the local scope
+            # workaround for values formatted like 0:("I am test") // used for sampler schedules
+            key_frame_series[i] = numexpr.evaluate(value) if not is_single_string else sanitize_value(value)
+        elif is_single_string:  # take previous string value and replicate it
+            key_frame_series[i] = key_frame_series[i - 1]
+    key_frame_series = key_frame_series.astype(float) if not is_single_string else key_frame_series  # as string
+
+    if interp_method == 'Cubic' and len(key_frames.items()) <= 3:
+        interp_method = 'Quadratic'
+    if interp_method == 'Quadratic' and len(key_frames.items()) <= 2:
+        interp_method = 'Linear'
+
+    key_frame_series[0] = key_frame_series[key_frame_series.first_valid_index()]
+    key_frame_series[max_frames - 1] = key_frame_series[key_frame_series.last_valid_index()]
+    key_frame_series = key_frame_series.interpolate(method=interp_method.lower(), limit_direction='both')
+
+    if integer:
+        return key_frame_series.astype(int)
+    return key_frame_series
+
+def batch_parse_key_frames(string, max_frames):
+    # Split the schedule on commas and parse each "frame: value" pair, after
+    # trimming any trailing comma; frame keys that are quoted numexpr
+    # expressions are evaluated to integers.
+    string = re.sub(r',\s*$', '', string)
+    frames = dict()
+    for match_object in string.split(","):
+        frameParam = match_object.split(":")
+        max_f = max_frames - 1  # referenced inside numexpr expressions even though it looks unused here
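+        # The [::-1] round trip strips one quote from each end of the key
+        # without touching quotes inside the expression itself.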
+ frame = int(sanitize_value(frameParam[0])) if check_is_number( + sanitize_value(frameParam[0].strip())) else int(numexpr.evaluate( + frameParam[0].strip().replace("'", "", 1).replace('"', "", 1)[::-1].replace("'", "", 1).replace('"', "",1)[::-1])) + frames[frame] = frameParam[1].strip() + if frames == {} and len(string) != 0: + raise RuntimeError('Key Frame string not correctly formatted') + return frames \ No newline at end of file diff --git a/custom_nodes/ComfyUI_FizzNodes/WaveNodes.py b/custom_nodes/ComfyUI_FizzNodes/WaveNodes.py new file mode 100644 index 0000000000000000000000000000000000000000..b79262a14cf1c7483110cc3ec9288d32e6278709 --- /dev/null +++ b/custom_nodes/ComfyUI_FizzNodes/WaveNodes.py @@ -0,0 +1,189 @@ +import numpy as np + +class Lerp: + @classmethod + def INPUT_TYPES(s): + return {"required": {"num_Images": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.01}), + "current_frame": ("INT", {"default": 1.0, "min": 0.0, "max": 9999, "step": 1.0}), + }} + RETURN_TYPES = ("FLOAT", "INT",) + FUNCTION = "lerp" + + CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes" + + def lerp(self, num_Images, strength, current_frame): + step = strength/num_Images + output = strength - (step * current_frame) + return (output, int(output),) + +class SinWave: + @classmethod + def INPUT_TYPES(s): + return {"required": {"phase": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "amplitude": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.1}), + "x_translation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "y_translation": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.05}), + "current_frame": ("INT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + }} + RETURN_TYPES = ("FLOAT","INT",) + FUNCTION = "Wave" + + CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes" + + def Wave(self, phase, amplitude, x_translation, y_translation, current_frame): + output = (y_translation+(amplitude*(np.sin((2*np.pi*current_frame/phase-x_translation))))) + print(output) + return (output, int(output),) + +class InvSinWave: + @classmethod + def INPUT_TYPES(s): + return {"required": {"phase": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "amplitude": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.1}), + "x_translation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "y_translation": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.05}), + "current_frame": ("INT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + }} + RETURN_TYPES = ("FLOAT", "INT") + FUNCTION = "Wave" + + CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes" + + def Wave(self, phase, amplitude, x_translation, y_translation, current_frame): + output = (y_translation+(amplitude*-(np.sin(-1*(2*np.pi*current_frame/phase-x_translation))))) + print(output) + return (output, int(output),) + +class CosWave: + @classmethod + def INPUT_TYPES(s): + return {"required": {"phase": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "amplitude": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.1}), + "x_translation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "y_translation": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.05}), + "current_frame": ("INT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + }} + RETURN_TYPES = ("FLOAT", "INT", ) + 
FUNCTION = "Wave" + + CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes" + + def Wave(self, phase, amplitude, x_translation, y_translation, current_frame): + output = (y_translation+(amplitude*(np.cos((2*np.pi*current_frame/phase-x_translation))))) + print(output) + return (output, int(output),) + +class InvCosWave: + @classmethod + def INPUT_TYPES(s): + return {"required": {"phase": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "amplitude": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.1}), + "x_translation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "y_translation": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.05}), + "current_frame": ("INT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + }} + RETURN_TYPES = ("FLOAT", "INT", ) + FUNCTION = "Wave" + + CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes" + + def Wave(self, phase, amplitude, x_translation, y_translation, current_frame): + output = (y_translation+(amplitude*-(np.cos(-1*(2*np.pi*current_frame/phase-x_translation))))) + print(output) + return (output, int(output),) + +class SquareWave: + @classmethod + def INPUT_TYPES(s): + return {"required": {"phase": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "amplitude": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.1}), + "x_translation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "y_translation": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.05}), + "current_frame": ("INT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + }} + RETURN_TYPES = ("FLOAT", "INT",) + FUNCTION = "Wave" + + CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes" + + def Wave(self, phase, amplitude, x_translation, y_translation, current_frame): + output = (y_translation+(amplitude*0**0**(0-np.sin((np.pi*current_frame/phase-x_translation))))) + print(output) + return (output, int(output),) + +class SawtoothWave: + @classmethod + def INPUT_TYPES(s): + return {"required": {"phase": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "step_increment": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.1}), + "x_translation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "start_value": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.05}), + "current_frame": ("INT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + }} + RETURN_TYPES = ("FLOAT", "INT", ) + FUNCTION = "Wave" + + CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes" + + def Wave(self, phase, step_increment, x_translation, start_value, current_frame): + output = (start_value+(step_increment*(current_frame%phase)-x_translation)) + print(output) + return (output, int(output),) + +class TriangleWave: + @classmethod + def INPUT_TYPES(s): + return {"required": {"phase": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "amplitude": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.1}), + "x_translation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + "y_translation": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.05}), + "current_frame": ("INT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}), + }} + RETURN_TYPES = ("FLOAT", "INT",) + FUNCTION = "Wave" + + CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes" + + def Wave(self, phase, amplitude, x_translation, y_translation, current_frame): + output = 
(y_translation+amplitude/np.pi*(np.arcsin(np.sin(2*np.pi/phase*current_frame-x_translation))))
+        print(output)
+        return (output, int(output),)
+
+class AbsCosWave:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"phase": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}),
+                             "amplitude": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.1}),
+                             "x_translation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 9999.0, "step": 1.0}),
+                             "max_value": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.05}),
+                             "current_frame": ("INT", {"default": 1, "min": 0, "max": 9999, "step": 1}),
+                             }}
+    RETURN_TYPES = ("FLOAT", "INT")
+    FUNCTION = "Wave"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes"
+
+    def Wave(self, phase, amplitude, x_translation, max_value, current_frame):
+        output = (max_value-(np.abs(np.cos(current_frame/phase))*amplitude))
+        print(output)
+        return (output, int(output),)
+
+class AbsSinWave:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"phase": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 9999.0, "step": 1.0}),
+                             "amplitude": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.1}),
+                             "x_translation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 9999.0, "step": 1.0}),
+                             "max_value": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 9999.0, "step": 0.05}),
+                             "current_frame": ("INT", {"default": 1, "min": 0, "max": 9999, "step": 1}),
+                             }}
+    RETURN_TYPES = ("FLOAT", "INT")
+    FUNCTION = "Wave"
+
+    CATEGORY = "FizzNodes 📅🅕🅝/WaveNodes"
+
+    def Wave(self, phase, amplitude, x_translation, max_value, current_frame):
+        output = (max_value-(np.abs(np.sin(current_frame/phase))*amplitude))
+        print(output)
+        return (output, int(output),)
\ No newline at end of file
diff --git a/custom_nodes/ComfyUI_FizzNodes/__init__.py b/custom_nodes/ComfyUI_FizzNodes/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..706b885dc2bdf19e58aee42f45201a99c1577e71
--- /dev/null
+++ b/custom_nodes/ComfyUI_FizzNodes/__init__.py
@@ -0,0 +1,139 @@
+# Made by Davemane42#0042 for ComfyUI
+import os
+import subprocess
+import importlib.util
+import sys
+import filecmp
+import shutil
+
+import __main__
+
+python = sys.executable
+
+
+extensions_folder = os.path.join(os.path.dirname(os.path.realpath(__main__.__file__)),
+                                 "web" + os.sep + "extensions" + os.sep + "FizzleDorf")
+javascript_folder = os.path.join(os.path.dirname(os.path.realpath(__file__)), "javascript")
+
+if not os.path.exists(extensions_folder):
+    print('Making the "web/extensions/FizzleDorf" folder')
+    os.makedirs(extensions_folder)
+
+result = filecmp.dircmp(javascript_folder, extensions_folder)
+
+if result.left_only or result.diff_files:
+    print('Update to javascript files detected')
+    file_list = list(result.left_only)
+    file_list.extend(x for x in result.diff_files if x not in file_list)
+
+    for file in file_list:
+        print(f'Copying {file} to extensions folder')
+        src_file = os.path.join(javascript_folder, file)
+        dst_file = os.path.join(extensions_folder, file)
+        if os.path.exists(dst_file):
+            os.remove(dst_file)
+        shutil.copy(src_file, dst_file)
+
+
+def is_installed(package, package_overwrite=None):
+    try:
+        spec = importlib.util.find_spec(package)
+    except ModuleNotFoundError:
+        spec = None  # the module isn't importable at all, so treat it as missing
+
+    package = package_overwrite or package
+
+    if spec is None:
+        print(f"Installing {package}...")
+        command = f'"{python}" -m pip install {package}'
+
+        result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
shell=True, env=os.environ) + + if result.returncode != 0: + print(f"Couldn't install\nCommand: {command}\nError code: {result.returncode}") + +from .WaveNodes import Lerp, SinWave, InvSinWave, CosWave, InvCosWave, SquareWave, SawtoothWave, TriangleWave, AbsCosWave, AbsSinWave +from .ScheduledNodes import ( + ValueSchedule, PromptSchedule, PromptScheduleNodeFlow, PromptScheduleNodeFlowEnd, PromptScheduleEncodeSDXL, + StringSchedule, BatchPromptSchedule, BatchValueSchedule, BatchPromptScheduleEncodeSDXL, BatchStringSchedule, + BatchValueScheduleLatentInput, BatchPromptScheduleEncodeSDXLLatentInput, BatchPromptScheduleLatentInput, + ImageBatchFromValueSchedule + #, BatchPromptScheduleNodeFlowEnd #, BatchGLIGENSchedule +) +from .FrameNodes import FrameConcatenate, InitNodeFrame, NodeFrame, StringConcatenate +from .HelperNodes import ConcatStringSingle, convertKeyframeKeysToBatchKeys, CalculateFrameOffset + +NODE_CLASS_MAPPINGS = { + "Lerp": Lerp, + "SinWave": SinWave, + "InvSinWave": InvSinWave, + "CosWave": CosWave, + "InvCosWave": InvCosWave, + "SquareWave":SquareWave, + "SawtoothWave": SawtoothWave, + "TriangleWave": TriangleWave, + "AbsCosWave": AbsCosWave, + "AbsSinWave": AbsSinWave, + "PromptSchedule": PromptSchedule, + "ValueSchedule": ValueSchedule, + "PromptScheduleNodeFlow": PromptScheduleNodeFlow, + "PromptScheduleNodeFlowEnd": PromptScheduleNodeFlowEnd, + "PromptScheduleEncodeSDXL":PromptScheduleEncodeSDXL, + "StringSchedule":StringSchedule, + "BatchPromptSchedule": BatchPromptSchedule, + "BatchValueSchedule": BatchValueSchedule, + "BatchPromptScheduleEncodeSDXL": BatchPromptScheduleEncodeSDXL, + "BatchStringSchedule": BatchStringSchedule, + "BatchValueScheduleLatentInput": BatchValueScheduleLatentInput, + "BatchPromptScheduleSDXLLatentInput":BatchPromptScheduleEncodeSDXLLatentInput, + "BatchPromptScheduleLatentInput":BatchPromptScheduleLatentInput, + "ImageBatchFromValueSchedule":ImageBatchFromValueSchedule, + #"BatchPromptScheduleNodeFlowEnd":BatchPromptScheduleNodeFlowEnd, + #"BatchGLIGENSchedule": BatchGLIGENSchedule, + + "StringConcatenate":StringConcatenate, + "Init FizzFrame":InitNodeFrame, + "FizzFrame":NodeFrame, + "FizzFrameConcatenate":FrameConcatenate, + + "ConcatStringSingle": ConcatStringSingle, + "convertKeyframeKeysToBatchKeys": convertKeyframeKeysToBatchKeys, + "CalculateFrameOffset":CalculateFrameOffset, +} + +NODE_DISPLAY_NAME_MAPPINGS = { + "Lerp": "Lerp 📅🅕🅝", + "SinWave": "SinWave 📅🅕🅝", + "InvSinWave": "InvSinWave 📅🅕🅝", + "CosWave": "CosWave 📅🅕🅝", + "InvCosWave": "InvCosWave 📅🅕🅝", + "SquareWave":"SquareWave 📅🅕🅝", + "SawtoothWave": "SawtoothWave 📅🅕🅝", + "TriangleWave": "TriangleWave 📅🅕🅝", + "AbsCosWave": "AbsCosWave 📅🅕🅝", + "AbsSinWave": "AbsSinWave 📅🅕🅝", + "PromptSchedule": "Prompt Schedule 📅🅕🅝", + "ValueSchedule": "Value Schedule 📅🅕🅝", + "PromptScheduleNodeFlow": "Prompt Schedule NodeFlow 📅🅕🅝", + "PromptScheduleNodeFlowEnd": "Prompt Schedule NodeFlow End 📅🅕🅝", + "StringSchedule":"String Schedule 📅🅕🅝", + "StringConcatenate":"String Concatenate 📅🅕🅝", + "Init FizzFrame":"Init Node Frame 📅🅕🅝", + "FizzFrame":"Node Frame 📅🅕🅝", + "FizzFrameConcatenate":"Frame Concatenate 📅🅕🅝", + "BatchPromptSchedule": "Batch Prompt Schedule 📅🅕🅝", + "BatchValueSchedule": "Batch Value Schedule 📅🅕🅝", + "PromptScheduleEncodeSDXL": "Prompt Schedule SDXL 📅🅕🅝", + "BatchPromptScheduleEncodeSDXL": "Batch Prompt Schedule SDXL 📅🅕🅝", + "BatchStringSchedule": "Batch String Schedule 📅🅕🅝", + "BatchValueScheduleLatentInput": "Batch Value Schedule (Latent Input) 📅🅕🅝", + 
"BatchPromptScheduleSDXLLatentInput": "Batch Prompt Schedule SDXL (Latent Input) 📅🅕🅝", + "BatchPromptScheduleLatentInput": "Batch Prompt Schedule (Latent Input) 📅🅕🅝", + "ImageBatchFromValueSchedule":"Image Batch From Value Schedule 📅🅕🅝", + "ConcatStringSingle": "Concat String (Single) 📅🅕🅝", + "convertKeyframeKeysToBatchKeys":"Keyframe Keys To Batch Keys 📅🅕🅝", + "SelectFrameNumber":"Select Frame Number 📅🅕🅝", + "CalculateFrameOffset":"Calculate Frame Offset 📅🅕🅝", +} +print('\033[34mFizzleDorf Custom Nodes: \033[92mLoaded\033[0m') diff --git a/custom_nodes/ComfyUI_FizzNodes/javascript/Folder here to satisfy init, eventually I'll have stuff in here..txt b/custom_nodes/ComfyUI_FizzNodes/javascript/Folder here to satisfy init, eventually I'll have stuff in here..txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/custom_nodes/ComfyUI_FizzNodes/requirements.txt b/custom_nodes/ComfyUI_FizzNodes/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..7158bb086a7262e9ff6129e3036b195cd8f10de7 --- /dev/null +++ b/custom_nodes/ComfyUI_FizzNodes/requirements.txt @@ -0,0 +1,2 @@ +pandas +numexpr diff --git a/custom_nodes/ComfyUI_Noise/LICENSE b/custom_nodes/ComfyUI_Noise/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..f288702d2fa16d3cdf0035b15a9fcbc552cd88e7 --- /dev/null +++ b/custom_nodes/ComfyUI_Noise/LICENSE @@ -0,0 +1,674 @@ + GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. 
+ + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. 
+ + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. 
Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. 
+ + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. + + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. 
+ + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. + + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. 
+ + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. 
If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). 
To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. 
+ + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. 
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/custom_nodes/ComfyUI_Noise/README.md b/custom_nodes/ComfyUI_Noise/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..7402929546b9cc995e7346f96f5ee5724b5ab9e6
--- /dev/null
+++ b/custom_nodes/ComfyUI_Noise/README.md
@@ -0,0 +1,88 @@
+# ComfyUI Noise
+
+This repo contains 6 nodes for [ComfyUI](https://github.com/comfyanonymous/ComfyUI) that allow for more control and flexibility over the noise. This enables e.g. workflows with small variations to generations, or finding the accompanying noise to some input image and prompt.
+
+## Nodes
+
+### Noisy Latent Image:
+This node lets you generate noise. You can find this node under `latent>noise` and it has the following settings:
+- **source**: where to generate the noise, currently supports GPU and CPU.
+- **seed**: the noise seed.
+- **width**: image width.
+- **height**: image height.
+- **batch_size**: batch size.
+
+### Duplicate Batch Index:
+The functionality of this node has been moved to core; please use `Latent>Batch>Repeat Latent Batch` and `Latent>Batch>Latent From Batch` instead.
+
+This node lets you duplicate a given sample in the batch; this can be used to duplicate e.g. encoded images, but also noise generated from the node listed above. You can find this node under `latent` and it has the following settings:
+- **latents**: the latents.
+- **batch_index**: which sample in the latents to duplicate.
+- **batch_size**: the new batch size (i.e. how many times to duplicate the sample).
+
+### Slerp Latents:
+This node lets you mix two latents together. Both of the input latents must share the same dimensions, or the node will ignore the mix factor and instead output the top slot. When it comes to other things attached to the latents, such as masks, only those of the top slot are passed on. You can find this node under `latent` and it comes with the following inputs:
+- **latents1**: first batch of latents.
+- **latents2**: second batch of latents. This input is optional.
+- **mask**: determines where in the latents to slerp. This input is optional.
+- **factor**: how much of the second batch of latents should be slerped into the first.
+
+### Get Sigma:
+This node can be used to calculate the amount of noise a sampler expects when it starts denoising. You can find this node under `latent>noise` and it comes with the following inputs and settings (a short sketch of the underlying calculation follows the Unsampler section below):
+- **model**: the model for which to calculate the sigma.
+- **sampler_name**: the name of the sampler for which to calculate the sigma.
+- **scheduler**: the type of schedule used in the sampler.
+- **steps**: the total number of steps in the schedule.
+- **start_at_step**: the start step of the sampler, i.e. how much noise it expects in the input image.
+- **end_at_step**: the current end step of the previous sampler, i.e. how much noise is already in the image.
+
+Most of the time you'd simply want to keep `start_at_step` at zero and `end_at_step` at `steps`, but if you want to re-inject some noise in between two samplers, e.g. one sampler that denoises from 0 to 15 and a second that denoises from 10 to 20, you'd want to use a `start_at_step` of 10 and an `end_at_step` of 15, so that the image we get, which is at step 15, can be noised back down to step 10 and the second sampler can bring it to 20. Take note that the advanced KSampler has settings for `add_noise` and `return_with_leftover_noise`, both of which we want disabled when working with these nodes.
+
+### Inject Noise:
+This node lets you actually inject the noise into an image latent. You can find this node under `latent>noise` and it comes with the following inputs:
+- **latents**: the latents to inject the noise into.
+- **noise**: the noise. This input is optional.
+- **mask**: determines where to inject noise. This input is optional.
+- **strength**: the strength of the noise. Note that we can use the `Get Sigma` node above to calculate an appropriate strength value for us.
+
+### Unsampler:
+This node does the reverse of a sampler: it calculates the noise that would generate the image, given the model and the prompt. You can find this node under `sampling` and it takes the following inputs and settings:
+- **model**: the model to target.
+- **steps**: number of steps to noise.
+- **end_at_step**: the step to travel back to.
+- **cfg**: classifier-free guidance scale.
+- **sampler_name**: the name of the sampling technique to use.
+- **scheduler**: the type of schedule to use.
+- **normalize**: whether to normalize the noise before output. Useful when passing it on to an Inject Noise node, which expects normalized noise.
+- **positive**: positive prompt.
+- **negative**: negative prompt.
+- **latent_image**: the image to renoise.
+
+When trying to reconstruct the target image as faithfully as possible, this works best if both the unsampler and the sampler use a cfg scale close to 1.0 and a similar number of steps. But it is fun and worthwhile to play around with these settings to get a better intuition for the results. This node lets you do things similar to what the A1111 [img2img alternative](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#img2img-alternative-test) script does.
+
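+For intuition, here is a minimal sketch of the calculation behind `Get Sigma` and `Inject Noise` (simplified from this repo's `nodes.py`; the real nodes also load the model, divide by the model's latent scale factor, and handle masks):
+
+```python
+import torch
+
+def get_sigma(sigmas: torch.Tensor, start_at_step: int, end_at_step: int) -> float:
+    # strength needed to take a latent that is at `end_at_step` back to `start_at_step`;
+    # `sigmas` is the sampler's noise schedule (decreasing towards zero)
+    return float(sigmas[start_at_step] - sigmas[end_at_step])
+
+def inject_noise(latents: torch.Tensor, noise: torch.Tensor, strength: float) -> torch.Tensor:
+    # add scaled noise; a sampler can then denoise the result back down
+    return latents + noise * strength
+```
+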
+## Examples
+
+Here are some examples that show how to use the nodes above. Workflows for these examples can be found in the `example_workflows` folder.
+
+### Generating variations
+
+![screenshot of a workflow that demos generating small variations to a given seed](https://github.com/BlenderNeko/ComfyUI_noise/blob/master/examples/example_variation.png)
+
+To create small variations to a given generation we can do the following: we generate the noise of the seed we're interested in with a `Noisy Latent Image` node, then create an entire batch of these with a `Duplicate Batch Index` node. Note that if we were doing this for img2img we could use that same node to duplicate the image latents. Next we generate some more noise, but this time a batch of noise rather than a single sample. We then slerp this newly created noise into the duplicated noise with a `Slerp Latents` node. To figure out the required strength for injecting this noise we use a `Get Sigma` node. And finally we inject the slerped noise into a batch of empty latents with an `Inject Noise` node. Take note that we use an advanced KSampler with the `add_noise` setting disabled.
+
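+At its core, the variation trick is just a spherical interpolation between the duplicated seed noise and a batch of fresh noise. A minimal sketch, mirroring the `slerp` helper in this repo's `nodes.py` (the node additionally guards against NaNs and mismatched shapes):
+
+```python
+import torch
+
+def slerp(val, low, high):
+    # flatten each sample, interpolate along the great circle, restore the shape
+    low_f = low.reshape(low.shape[0], -1)
+    high_f = high.reshape(high.shape[0], -1)
+    low_n = low_f / torch.norm(low_f, dim=1, keepdim=True)
+    high_n = high_f / torch.norm(high_f, dim=1, keepdim=True)
+    omega = torch.acos((low_n * high_n).sum(1))
+    so = torch.sin(omega)
+    res = (torch.sin((1.0 - val) * omega) / so).unsqueeze(1) * low_f \
+        + (torch.sin(val * omega) / so).unsqueeze(1) * high_f
+    return res.reshape(low.shape)
+
+base = torch.randn(1, 4, 64, 64).repeat(4, 1, 1, 1)  # seed noise, duplicated 4x
+fresh = torch.randn(4, 4, 64, 64)                    # one fresh sample per variation
+variations = slerp(0.05, base, fresh)                # small factor = small variations
+```
+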
+ +
+### "Unsampling"
+
+![screenshot of a workflow that demos recovering the noise that reconstructs a given image](https://github.com/BlenderNeko/ComfyUI_noise/blob/master/examples/example_unsample.png)
+
+To get the noise that recreates a certain image, we first load an image. Then we use the `Unsampler` node with a low cfg value. To check that this is working, we then take the resulting noise and feed it back into an advanced KSampler with the `add_noise` setting disabled and a cfg of 1.0.
+
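+Under the hood, the `Unsampler` simply runs a regular sampler along a reversed sigma schedule, starting from the clean latent and adding no fresh noise. A condensed sketch of what this repo's `nodes.py` does (assuming `model`, `positive`, `negative`, and `latent` come from the surrounding ComfyUI graph, with conditioning already converted):
+
+```python
+import torch
+import comfy.samplers
+import comfy.model_management
+
+device = comfy.model_management.get_torch_device()
+sampler = comfy.samplers.KSampler(model.model, steps=25, device=device,
+                                  sampler="dpmpp_2m", scheduler="karras",
+                                  denoise=1.0, model_options=model.model_options)
+sigmas = sampler.sigmas.flip(0) + 0.0001  # reversed schedule: clean -> noisy
+noise = torch.zeros_like(latent)          # no extra noise is added
+recovered = sampler.sample(noise, positive, negative, cfg=1.0,
+                           latent_image=latent, sigmas=sigmas,
+                           start_step=0, last_step=25,
+                           force_full_denoise=False)
+```
+
+Feeding `recovered` back into an advanced KSampler with `add_noise` disabled should then approximately reproduce the original image.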
+ diff --git a/custom_nodes/ComfyUI_Noise/__init__.py b/custom_nodes/ComfyUI_Noise/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..d721463be66961a2f388b3a756760d167ea5d510 --- /dev/null +++ b/custom_nodes/ComfyUI_Noise/__init__.py @@ -0,0 +1,3 @@ +from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS + +__all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS'] \ No newline at end of file diff --git a/custom_nodes/ComfyUI_Noise/example_workflows/unsample_example.json b/custom_nodes/ComfyUI_Noise/example_workflows/unsample_example.json new file mode 100644 index 0000000000000000000000000000000000000000..86ebae968a66c3450636d45465de50d9a628e6ce --- /dev/null +++ b/custom_nodes/ComfyUI_Noise/example_workflows/unsample_example.json @@ -0,0 +1,698 @@ +{ + "last_node_id": 27, + "last_link_id": 66, + "nodes": [ + { + "id": 23, + "type": "Reroute", + "pos": [ + 228, + 840 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 5, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 50 + } + ], + "outputs": [ + { + "name": "", + "type": "VAE", + "links": [ + 51, + 52 + ], + "slot_index": 0 + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 24, + "type": "Reroute", + "pos": [ + 400, + 740 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 2, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 53 + } + ], + "outputs": [ + { + "name": "", + "type": "MODEL", + "links": [ + 54 + ], + "slot_index": 0 + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 8, + "type": "VAEDecode", + "pos": [ + 970, + 640 + ], + "size": { + "0": 210, + "1": 46 + }, + "flags": {}, + "order": 11, + "mode": 0, + "inputs": [ + { + "name": "samples", + "type": "LATENT", + "link": 44 + }, + { + "name": "vae", + "type": "VAE", + "link": 52 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 9 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "VAEDecode" + } + }, + { + "id": 9, + "type": "SaveImage", + "pos": [ + 1280, + 681 + ], + "size": { + "0": 367.50909423828125, + "1": 383.8414306640625 + }, + "flags": {}, + "order": 12, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 9 + } + ], + "properties": {}, + "widgets_values": [ + "ComfyUI" + ] + }, + { + "id": 7, + "type": "CLIPTextEncode", + "pos": [ + -64, + 642 + ], + "size": { + "0": 425.27801513671875, + "1": 180.6060791015625 + }, + "flags": {}, + "order": 4, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 5 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 56 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "text, watermark" + ] + }, + { + "id": 6, + "type": "CLIPTextEncode", + "pos": [ + -68, + 432 + ], + "size": { + "0": 422.84503173828125, + "1": 164.31304931640625 + }, + "flags": {}, + "order": 3, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 3 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 59 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "beautiful scenery nature glass bottle landscape, , purple galaxy bottle," + ] + }, + { + "id": 19, + "type": "LoadImage", + "pos": [ + -124, + 906 + ], + "size": { + "0": 
434.40911865234375, + "1": 440.44140625 + }, + "flags": {}, + "order": 0, + "mode": 0, + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 34 + ], + "slot_index": 0 + }, + { + "name": "MASK", + "type": "MASK", + "links": null + } + ], + "properties": { + "Node name for S&R": "LoadImage" + }, + "widgets_values": [ + "example.png", + "image" + ] + }, + { + "id": 12, + "type": "KSamplerAdvanced", + "pos": [ + 950, + 740 + ], + "size": { + "0": 315, + "1": 334 + }, + "flags": {}, + "order": 10, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 54 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 61 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 58 + }, + { + "name": "latent_image", + "type": "LATENT", + "link": 66 + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 44 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSamplerAdvanced" + }, + "widgets_values": [ + "disable", + 0, + "fixed", + 25, + 1, + "dpmpp_2m", + "karras", + 0, + 25, + "disable" + ] + }, + { + "id": 26, + "type": "Reroute", + "pos": [ + 450, + 670 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 6, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 59 + } + ], + "outputs": [ + { + "name": "", + "type": "CONDITIONING", + "links": [ + 61, + 62 + ], + "slot_index": 0 + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 25, + "type": "Reroute", + "pos": [ + 430, + 700 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 7, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 56 + } + ], + "outputs": [ + { + "name": "", + "type": "CONDITIONING", + "links": [ + 58, + 63 + ], + "slot_index": 0 + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 20, + "type": "VAEEncode", + "pos": [ + 354, + 894 + ], + "size": { + "0": 210, + "1": 46 + }, + "flags": {}, + "order": 8, + "mode": 0, + "inputs": [ + { + "name": "pixels", + "type": "IMAGE", + "link": 34 + }, + { + "name": "vae", + "type": "VAE", + "link": 51 + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 64 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "VAEEncode" + } + }, + { + "id": 4, + "type": "CheckpointLoaderSimple", + "pos": [ + -635, + 661 + ], + "size": { + "0": 315, + "1": 98 + }, + "flags": {}, + "order": 1, + "mode": 0, + "outputs": [ + { + "name": "MODEL", + "type": "MODEL", + "links": [ + 53, + 65 + ], + "slot_index": 0 + }, + { + "name": "CLIP", + "type": "CLIP", + "links": [ + 3, + 5 + ], + "slot_index": 1 + }, + { + "name": "VAE", + "type": "VAE", + "links": [ + 50 + ], + "slot_index": 2 + } + ], + "properties": { + "Node name for S&R": "CheckpointLoaderSimple" + }, + "widgets_values": [ + "v1-5-pruned-emaonly.safetensors" + ] + }, + { + "id": 27, + "type": "BNK_Unsampler", + "pos": [ + 608, + 857 + ], + "size": { + "0": 315, + "1": 214 + }, + "flags": {}, + "order": 9, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 65 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 62 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 63 + }, + { + "name": "latent_image", + "type": "LATENT", + "link": 64 + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 66 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "BNK_Unsampler" + }, 
+ "widgets_values": [ + 25, + 0, + 1, + "dpmpp_2m", + "karras" + ] + } + ], + "links": [ + [ + 3, + 4, + 1, + 6, + 0, + "CLIP" + ], + [ + 5, + 4, + 1, + 7, + 0, + "CLIP" + ], + [ + 9, + 8, + 0, + 9, + 0, + "IMAGE" + ], + [ + 34, + 19, + 0, + 20, + 0, + "IMAGE" + ], + [ + 44, + 12, + 0, + 8, + 0, + "LATENT" + ], + [ + 50, + 4, + 2, + 23, + 0, + "*" + ], + [ + 51, + 23, + 0, + 20, + 1, + "VAE" + ], + [ + 52, + 23, + 0, + 8, + 1, + "VAE" + ], + [ + 53, + 4, + 0, + 24, + 0, + "*" + ], + [ + 54, + 24, + 0, + 12, + 0, + "MODEL" + ], + [ + 56, + 7, + 0, + 25, + 0, + "*" + ], + [ + 58, + 25, + 0, + 12, + 2, + "CONDITIONING" + ], + [ + 59, + 6, + 0, + 26, + 0, + "*" + ], + [ + 61, + 26, + 0, + 12, + 1, + "CONDITIONING" + ], + [ + 62, + 26, + 0, + 27, + 1, + "CONDITIONING" + ], + [ + 63, + 25, + 0, + 27, + 2, + "CONDITIONING" + ], + [ + 64, + 20, + 0, + 27, + 3, + "LATENT" + ], + [ + 65, + 4, + 0, + 27, + 0, + "MODEL" + ], + [ + 66, + 27, + 0, + 12, + 3, + "LATENT" + ] + ], + "groups": [], + "config": {}, + "extra": {}, + "version": 0.4 +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI_Noise/example_workflows/variations_example.json b/custom_nodes/ComfyUI_Noise/example_workflows/variations_example.json new file mode 100644 index 0000000000000000000000000000000000000000..a9a75e41d34ecaeeb3a2d7f19f1c117a9ff4103d --- /dev/null +++ b/custom_nodes/ComfyUI_Noise/example_workflows/variations_example.json @@ -0,0 +1,868 @@ +{ + "last_node_id": 39, + "last_link_id": 84, + "nodes": [ + { + "id": 26, + "type": "Reroute", + "pos": [ + 450, + 670 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 10, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 59 + } + ], + "outputs": [ + { + "name": "", + "type": "CONDITIONING", + "links": [ + 61 + ], + "slot_index": 0 + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 25, + "type": "Reroute", + "pos": [ + 430, + 700 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 11, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 56 + } + ], + "outputs": [ + { + "name": "", + "type": "CONDITIONING", + "links": [ + 58 + ], + "slot_index": 0 + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 24, + "type": "Reroute", + "pos": [ + 400, + 740 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 4, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 53 + } + ], + "outputs": [ + { + "name": "", + "type": "MODEL", + "links": [ + 54 + ], + "slot_index": 0 + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 7, + "type": "CLIPTextEncode", + "pos": [ + -64, + 642 + ], + "size": { + "0": 425.27801513671875, + "1": 180.6060791015625 + }, + "flags": {}, + "order": 7, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 5 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 56 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "text, watermark" + ] + }, + { + "id": 6, + "type": "CLIPTextEncode", + "pos": [ + -68, + 432 + ], + "size": { + "0": 422.84503173828125, + "1": 164.31304931640625 + }, + "flags": {}, + "order": 6, + "mode": 0, + "inputs": [ + { + "name": "clip", + "type": "CLIP", + "link": 3 + } + ], + "outputs": [ + { + "name": "CONDITIONING", + "type": "CONDITIONING", + "links": [ + 59 + ], + "slot_index": 0 + } + ], + 
"properties": { + "Node name for S&R": "CLIPTextEncode" + }, + "widgets_values": [ + "beautiful scenery nature glass bottle landscape, , purple galaxy bottle," + ] + }, + { + "id": 12, + "type": "KSamplerAdvanced", + "pos": [ + 835, + 887 + ], + "size": { + "0": 315, + "1": 334 + }, + "flags": {}, + "order": 14, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 54 + }, + { + "name": "positive", + "type": "CONDITIONING", + "link": 61 + }, + { + "name": "negative", + "type": "CONDITIONING", + "link": 58 + }, + { + "name": "latent_image", + "type": "LATENT", + "link": 84 + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 44 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "KSamplerAdvanced" + }, + "widgets_values": [ + "disable", + 0, + "fixed", + 25, + 8, + "dpmpp_2m", + "karras", + 0, + 25, + "disable" + ] + }, + { + "id": 23, + "type": "Reroute", + "pos": [ + -230, + 1632 + ], + "size": [ + 75, + 26 + ], + "flags": {}, + "order": 8, + "mode": 0, + "inputs": [ + { + "name": "", + "type": "*", + "link": 50 + } + ], + "outputs": [ + { + "name": "", + "type": "VAE", + "links": [ + 52 + ], + "slot_index": 0 + } + ], + "properties": { + "showOutputText": false, + "horizontal": false + } + }, + { + "id": 8, + "type": "VAEDecode", + "pos": [ + 1183, + 1133 + ], + "size": { + "0": 210, + "1": 46 + }, + "flags": {}, + "order": 15, + "mode": 0, + "inputs": [ + { + "name": "samples", + "type": "LATENT", + "link": 44 + }, + { + "name": "vae", + "type": "VAE", + "link": 52 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 9 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "VAEDecode" + } + }, + { + "id": 9, + "type": "SaveImage", + "pos": [ + 771, + 1259 + ], + "size": { + "0": 494.55535888671875, + "1": 524.3897705078125 + }, + "flags": {}, + "order": 16, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 9 + } + ], + "properties": {}, + "widgets_values": [ + "ComfyUI" + ] + }, + { + "id": 4, + "type": "CheckpointLoaderSimple", + "pos": [ + -635, + 661 + ], + "size": { + "0": 315, + "1": 98 + }, + "flags": {}, + "order": 0, + "mode": 0, + "outputs": [ + { + "name": "MODEL", + "type": "MODEL", + "links": [ + 53, + 74 + ], + "slot_index": 0 + }, + { + "name": "CLIP", + "type": "CLIP", + "links": [ + 3, + 5 + ], + "slot_index": 1 + }, + { + "name": "VAE", + "type": "VAE", + "links": [ + 50 + ], + "slot_index": 2 + } + ], + "properties": { + "Node name for S&R": "CheckpointLoaderSimple" + }, + "widgets_values": [ + "v1-5-pruned-emaonly.safetensors" + ] + }, + { + "id": 34, + "type": "BNK_NoisyLatentImage", + "pos": [ + -216, + 980 + ], + "size": { + "0": 315, + "1": 178 + }, + "flags": {}, + "order": 1, + "mode": 0, + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 75 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "BNK_NoisyLatentImage" + }, + "widgets_values": [ + "CPU", + 0, + "fixed", + 512, + 512, + 1 + ] + }, + { + "id": 35, + "type": "BNK_NoisyLatentImage", + "pos": [ + -217, + 1197 + ], + "size": { + "0": 315, + "1": 178 + }, + "flags": {}, + "order": 2, + "mode": 0, + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 77 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "BNK_NoisyLatentImage" + }, + "widgets_values": [ + "CPU", + 1, + "fixed", + 512, + 512, + 4 + ] + }, + { + "id": 37, + "type": "BNK_DuplicateBatchIndex", + "pos": [ + 134, 
+ 1012 + ], + "size": { + "0": 315, + "1": 82 + }, + "flags": {}, + "order": 9, + "mode": 0, + "inputs": [ + { + "name": "latents", + "type": "LATENT", + "link": 75 + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 76 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "BNK_DuplicateBatchIndex" + }, + "widgets_values": [ + 0, + 4 + ] + }, + { + "id": 38, + "type": "BNK_SlerpLatent", + "pos": [ + 137, + 1144 + ], + "size": { + "0": 315, + "1": 98 + }, + "flags": {}, + "order": 12, + "mode": 0, + "inputs": [ + { + "name": "latents1", + "type": "LATENT", + "link": 76 + }, + { + "name": "latents2", + "type": "LATENT", + "link": 77 + }, + { + "name": "mask", + "type": "MASK", + "link": null + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 81 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "BNK_SlerpLatent" + }, + "widgets_values": [ + 0.05 + ] + }, + { + "id": 39, + "type": "BNK_InjectNoise", + "pos": [ + 476, + 1131 + ], + "size": [ + 315, + 98 + ], + "flags": {}, + "order": 13, + "mode": 0, + "inputs": [ + { + "name": "latents", + "type": "LATENT", + "link": 82 + }, + { + "name": "noise", + "type": "LATENT", + "link": 81 + }, + { + "name": "mask", + "type": "MASK", + "link": null + }, + { + "name": "strength", + "type": "FLOAT", + "link": 80, + "widget": { + "name": "strength", + "config": [ + "FLOAT", + { + "default": 1, + "min": 0, + "max": 20, + "step": 0.01 + } + ] + } + } + ], + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 84 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "BNK_InjectNoise" + }, + "widgets_values": [ + 1 + ] + }, + { + "id": 33, + "type": "EmptyLatentImage", + "pos": [ + 474, + 985 + ], + "size": { + "0": 315, + "1": 106 + }, + "flags": {}, + "order": 3, + "mode": 0, + "outputs": [ + { + "name": "LATENT", + "type": "LATENT", + "links": [ + 82 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "EmptyLatentImage" + }, + "widgets_values": [ + 512, + 512, + 4 + ] + }, + { + "id": 36, + "type": "BNK_GetSigma", + "pos": [ + -221, + 1420 + ], + "size": { + "0": 315, + "1": 154 + }, + "flags": {}, + "order": 5, + "mode": 0, + "inputs": [ + { + "name": "model", + "type": "MODEL", + "link": 74 + } + ], + "outputs": [ + { + "name": "FLOAT", + "type": "FLOAT", + "links": [ + 80 + ], + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "BNK_GetSigma" + }, + "widgets_values": [ + "dpmpp_2m", + "karras", + 25, + 0, + 25 + ] + } + ], + "links": [ + [ + 3, + 4, + 1, + 6, + 0, + "CLIP" + ], + [ + 5, + 4, + 1, + 7, + 0, + "CLIP" + ], + [ + 9, + 8, + 0, + 9, + 0, + "IMAGE" + ], + [ + 44, + 12, + 0, + 8, + 0, + "LATENT" + ], + [ + 50, + 4, + 2, + 23, + 0, + "*" + ], + [ + 52, + 23, + 0, + 8, + 1, + "VAE" + ], + [ + 53, + 4, + 0, + 24, + 0, + "*" + ], + [ + 54, + 24, + 0, + 12, + 0, + "MODEL" + ], + [ + 56, + 7, + 0, + 25, + 0, + "*" + ], + [ + 58, + 25, + 0, + 12, + 2, + "CONDITIONING" + ], + [ + 59, + 6, + 0, + 26, + 0, + "*" + ], + [ + 61, + 26, + 0, + 12, + 1, + "CONDITIONING" + ], + [ + 74, + 4, + 0, + 36, + 0, + "MODEL" + ], + [ + 75, + 34, + 0, + 37, + 0, + "LATENT" + ], + [ + 76, + 37, + 0, + 38, + 0, + "LATENT" + ], + [ + 77, + 35, + 0, + 38, + 1, + "LATENT" + ], + [ + 80, + 36, + 0, + 39, + 3, + "FLOAT" + ], + [ + 81, + 38, + 0, + 39, + 1, + "LATENT" + ], + [ + 82, + 33, + 0, + 39, + 0, + "LATENT" + ], + [ + 84, + 39, + 0, + 12, + 3, + "LATENT" + ] + ], + "groups": [], + "config": {}, 
+ "extra": {}, + "version": 0.4 +} \ No newline at end of file diff --git a/custom_nodes/ComfyUI_Noise/examples/example_unsample.png b/custom_nodes/ComfyUI_Noise/examples/example_unsample.png new file mode 100644 index 0000000000000000000000000000000000000000..6296c1d5490484cb7d183ca4974689d23b2bd695 Binary files /dev/null and b/custom_nodes/ComfyUI_Noise/examples/example_unsample.png differ diff --git a/custom_nodes/ComfyUI_Noise/examples/example_variation.png b/custom_nodes/ComfyUI_Noise/examples/example_variation.png new file mode 100644 index 0000000000000000000000000000000000000000..44d9a3f5424d9d8db31c090ba031385058cff69b Binary files /dev/null and b/custom_nodes/ComfyUI_Noise/examples/example_variation.png differ diff --git a/custom_nodes/ComfyUI_Noise/nodes.py b/custom_nodes/ComfyUI_Noise/nodes.py new file mode 100644 index 0000000000000000000000000000000000000000..cf412da1eb04d4ba22a5911e75c6b33b1da749dc --- /dev/null +++ b/custom_nodes/ComfyUI_Noise/nodes.py @@ -0,0 +1,265 @@ +import torch + +import os +import sys + +sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), "comfy")) + +import comfy.model_management +import comfy.sample + +MAX_RESOLUTION=8192 + +def prepare_mask(mask, shape): + mask = torch.nn.functional.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), size=(shape[2], shape[3]), mode="bilinear") + mask = mask.expand((-1,shape[1],-1,-1)) + if mask.shape[0] < shape[0]: + mask = mask.repeat((shape[0] -1) // mask.shape[0] + 1, 1, 1, 1)[:shape[0]] + return mask + +class NoisyLatentImage: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "source":(["CPU", "GPU"], ), + "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "width": ("INT", {"default": 512, "min": 64, "max": MAX_RESOLUTION, "step": 8}), + "height": ("INT", {"default": 512, "min": 64, "max": MAX_RESOLUTION, "step": 8}), + "batch_size": ("INT", {"default": 1, "min": 1, "max": 64}), + }} + RETURN_TYPES = ("LATENT",) + FUNCTION = "create_noisy_latents" + + CATEGORY = "latent/noise" + + def create_noisy_latents(self, source, seed, width, height, batch_size): + torch.manual_seed(seed) + if source == "CPU": + device = "cpu" + else: + device = comfy.model_management.get_torch_device() + noise = torch.randn((batch_size, 4, height // 8, width // 8), dtype=torch.float32, device=device).cpu() + return ({"samples":noise}, ) + +class DuplicateBatchIndex: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "latents":("LATENT",), + "batch_index": ("INT", {"default": 0, "min": 0, "max": 63}), + "batch_size": ("INT", {"default": 1, "min": 1, "max": 64}), + }} + + RETURN_TYPES = ("LATENT",) + FUNCTION = "duplicate_index" + + CATEGORY = "latent" + + def duplicate_index(self, latents, batch_index, batch_size): + s = latents.copy() + batch_index = min(s["samples"].shape[0] - 1, batch_index) + target = s["samples"][batch_index:batch_index + 1].clone() + target = target.repeat((batch_size,1,1,1)) + s["samples"] = target + return (s,) + +# from https://discuss.pytorch.org/t/help-regarding-slerp-function-for-generative-model-sampling/32475 +def slerp(val, low, high): + dims = low.shape + + #flatten to batches + low = low.reshape(dims[0], -1) + high = high.reshape(dims[0], -1) + + low_norm = low/torch.norm(low, dim=1, keepdim=True) + high_norm = high/torch.norm(high, dim=1, keepdim=True) + + # in case we divide by zero + low_norm[low_norm != low_norm] = 0.0 + high_norm[high_norm != high_norm] = 0.0 + + omega = torch.acos((low_norm*high_norm).sum(1)) + so 
= torch.sin(omega) + res = (torch.sin((1.0-val)*omega)/so).unsqueeze(1)*low + (torch.sin(val*omega)/so).unsqueeze(1) * high + return res.reshape(dims) + +class LatentSlerp: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "latents1":("LATENT",), + "factor": ("FLOAT", {"default": .5, "min": 0.0, "max": 1.0, "step": 0.01}), + }, + "optional" :{ + "latents2":("LATENT",), + "mask": ("MASK", ), + }} + + RETURN_TYPES = ("LATENT",) + FUNCTION = "slerp_latents" + + CATEGORY = "latent" + + def slerp_latents(self, latents1, factor, latents2=None, mask=None): + s = latents1.copy() + if latents2 is None: + return (s,) + if latents1["samples"].shape != latents2["samples"].shape: + print("warning, shapes in LatentSlerp not the same, ignoring") + return (s,) + slerped = slerp(factor, latents1["samples"].clone(), latents2["samples"].clone()) + if mask is not None: + mask = prepare_mask(mask, slerped.shape) + slerped = mask * slerped + (1-mask) * latents1["samples"] + s["samples"] = slerped + return (s,) + +class GetSigma: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "model": ("MODEL",), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "steps": ("INT", {"default": 10000, "min": 0, "max": 10000}), + "start_at_step": ("INT", {"default": 0, "min": 0, "max": 10000}), + "end_at_step": ("INT", {"default": 10000, "min": 1, "max": 10000}), + }} + + RETURN_TYPES = ("FLOAT",) + FUNCTION = "calc_sigma" + + CATEGORY = "latent/noise" + + def calc_sigma(self, model, sampler_name, scheduler, steps, start_at_step, end_at_step): + device = comfy.model_management.get_torch_device() + end_at_step = min(steps, end_at_step) + start_at_step = min(start_at_step, end_at_step) + real_model = None + comfy.model_management.load_model_gpu(model) + real_model = model.model + sampler = comfy.samplers.KSampler(real_model, steps=steps, device=device, sampler=sampler_name, scheduler=scheduler, denoise=1.0, model_options=model.model_options) + sigmas = sampler.sigmas + sigma = sigmas[start_at_step] - sigmas[end_at_step] + sigma /= model.model.latent_format.scale_factor + return (sigma.cpu().numpy(),) + +class InjectNoise: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "latents":("LATENT",), + + "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 200.0, "step": 0.01}), + }, + "optional":{ + "noise": ("LATENT",), + "mask": ("MASK", ), + }} + + RETURN_TYPES = ("LATENT",) + FUNCTION = "inject_noise" + + CATEGORY = "latent/noise" + + def inject_noise(self, latents, strength, noise=None, mask=None): + s = latents.copy() + if noise is None: + return (s,) + if latents["samples"].shape != noise["samples"].shape: + print("warning, shapes in InjectNoise not the same, ignoring") + return (s,) + noised = s["samples"].clone() + noise["samples"].clone() * strength + if mask is not None: + mask = prepare_mask(mask, noised.shape) + noised = mask * noised + (1-mask) * latents["samples"] + s["samples"] = noised + return (s,) + +class Unsampler: + @classmethod + def INPUT_TYPES(s): + return {"required": + {"model": ("MODEL",), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "end_at_step": ("INT", {"default": 0, "min": 0, "max": 10000}), + "cfg": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "normalize": (["disable", "enable"], ), + "positive": ("CONDITIONING", ), + "negative": ("CONDITIONING", ), + 
"latent_image": ("LATENT", ), + }} + + RETURN_TYPES = ("LATENT",) + FUNCTION = "unsampler" + + CATEGORY = "sampling" + + def unsampler(self, model, cfg, sampler_name, steps, end_at_step, scheduler, normalize, positive, negative, latent_image): + normalize = normalize == "enable" + device = comfy.model_management.get_torch_device() + latent = latent_image + latent_image = latent["samples"] + + end_at_step = min(end_at_step, steps-1) + end_at_step = steps - end_at_step + + noise = torch.zeros(latent_image.size(), dtype=latent_image.dtype, layout=latent_image.layout, device="cpu") + noise_mask = None + if "noise_mask" in latent: + noise_mask = comfy.sample.prepare_mask(latent["noise_mask"], noise.shape, device) + + real_model = None + real_model = model.model + + noise = noise.to(device) + latent_image = latent_image.to(device) + + positive = comfy.sample.convert_cond(positive) + negative = comfy.sample.convert_cond(negative) + + models, inference_memory = comfy.sample.get_additional_models(positive, negative, model.model_dtype()) + + comfy.model_management.load_models_gpu([model] + models, model.memory_required(noise.shape) + inference_memory) + + sampler = comfy.samplers.KSampler(real_model, steps=steps, device=device, sampler=sampler_name, scheduler=scheduler, denoise=1.0, model_options=model.model_options) + + sigmas = sigmas = sampler.sigmas.flip(0) + 0.0001 + + pbar = comfy.utils.ProgressBar(steps) + def callback(step, x0, x, total_steps): + pbar.update_absolute(step + 1, total_steps) + + samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, force_full_denoise=False, denoise_mask=noise_mask, sigmas=sigmas, start_step=0, last_step=end_at_step, callback=callback) + if normalize: + #technically doesn't normalize because unsampling is not guaranteed to end at a std given by the schedule + samples -= samples.mean() + samples /= samples.std() + samples = samples.cpu() + + comfy.sample.cleanup_additional_models(models) + + out = latent.copy() + out["samples"] = samples + return (out, ) + +NODE_CLASS_MAPPINGS = { + "BNK_NoisyLatentImage": NoisyLatentImage, + #"BNK_DuplicateBatchIndex": DuplicateBatchIndex, + "BNK_SlerpLatent": LatentSlerp, + "BNK_GetSigma": GetSigma, + "BNK_InjectNoise": InjectNoise, + "BNK_Unsampler": Unsampler, +} + +NODE_DISPLAY_NAME_MAPPINGS = { + "BNK_NoisyLatentImage": "Noisy Latent Image", + #"BNK_DuplicateBatchIndex": "Duplicate Batch Index", + "BNK_SlerpLatent": "Slerp Latents", + "BNK_GetSigma": "Get Sigma", + "BNK_InjectNoise": "Inject Noise", + "BNK_Unsampler": "Unsampler", +} diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/LICENSE b/custom_nodes/ComfyUI_UltimateSDUpscale/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..e62ec04cdeece724caeeeeaeb6ae1f6af1bb6b9a --- /dev/null +++ b/custom_nodes/ComfyUI_UltimateSDUpscale/LICENSE @@ -0,0 +1,674 @@ +GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. 
We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. 
The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. 
+ + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. 
This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. + + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. 
+ + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. + + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. 
If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. + + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). 
+ + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". 
+ + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. 
+ + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. 
+
+  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+  17. Interpretation of Sections 15 and 16.
+
+  If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/README.md b/custom_nodes/ComfyUI_UltimateSDUpscale/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf9b379ff0c6d8c648e45ebcdf01200964b646cb
--- /dev/null
+++ b/custom_nodes/ComfyUI_UltimateSDUpscale/README.md
@@ -0,0 +1,34 @@
+# ComfyUI_UltimateSDUpscale
+
+ [ComfyUI](https://github.com/comfyanonymous/ComfyUI) nodes for the [Ultimate Stable Diffusion Upscale script by Coyote-A](https://github.com/Coyote-A/ultimate-upscale-for-automatic1111). This is a wrapper for the script used in the A1111 extension.
+
+## Installation
+
+Run the following command from the command line, starting in `ComfyUI/custom_nodes/`:
+```
+git clone https://github.com/ssitu/ComfyUI_UltimateSDUpscale --recursive
+```
+
+## Usage
+
+Nodes can be found in the node menu under `image/upscaling`:
+
+|Node|Description|
+| --- | --- |
+| Ultimate SD Upscale | The primary node; it has most of the same inputs as the original extension script. |
+| Ultimate SD Upscale<br>(No Upscale) | Same as the primary node, but without the upscale inputs; it assumes the input image has already been upscaled. Use this if you already have an upscaled image or just want to do the tiled sampling. |
+
+---
+
+Details about most of the parameters can be found [here](https://github.com/Coyote-A/ultimate-upscale-for-automatic1111/wiki/FAQ#parameters-descriptions).
+
+Parameters not found in the original repository:
+
+* `upscale_by` The number to multiply the width and height of the image by. If you want to specify an exact width and height, use the "No Upscale" version of the node and perform the upscaling separately (e.g., ImageUpscaleWithModel -> ImageScale -> UltimateSDUpscaleNoUpscale; see the sketch after this list).
+* `force_uniform_tiles` If enabled, tiles that would be cut off by the edges of the image are expanded into the rest of the image so that every tile keeps the size set by `tile_width` and `tile_height`, matching the A1111 Web UI behavior. If disabled, the minimal size that fits each tile is used instead, which may make sampling faster but can cause artifacts due to irregular tile sizes.
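+
+Below is a minimal headless sketch of that chain (not part of this repo). It assumes ComfyUI's stock `ImageUpscaleWithModel` and `ImageScale` node classes with their current signatures; `upscale_model` and `image` are placeholder inputs, and the exact 2048x2048 target is arbitrary:
+
+```
+from comfy_extras.nodes_upscale_model import ImageUpscaleWithModel
+from nodes import ImageScale
+
+# Model-based upscale, then resize to an exact width and height
+(upscaled,) = ImageUpscaleWithModel().upscale(upscale_model, image)
+(resized,) = ImageScale().upscale(upscaled, "bicubic", 2048, 2048, "disabled")
+# Feed `resized` into the "Ultimate SD Upscale (No Upscale)" node's upscaled_image input
+```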
+
+## Examples
+
+#### Using the ControlNet tile model:
+
+![image](https://github.com/ssitu/ComfyUI_UltimateSDUpscale/assets/57548627/64f8d3b2-10ae-45ee-9f8a-40b798a51655)
diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/__init__.py b/custom_nodes/ComfyUI_UltimateSDUpscale/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..359c0a729ce5e2e7db5ba588fd90e1ce2480e2a8
--- /dev/null
+++ b/custom_nodes/ComfyUI_UltimateSDUpscale/__init__.py
@@ -0,0 +1,39 @@
+import sys
+import os
+repo_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.insert(0, repo_dir)
+original_modules = sys.modules.copy()
+
+# Place aside potentially conflicting modules
+modules_used = [
+    "modules",
+    "modules.devices",
+    "modules.images",
+    "modules.processing",
+    "modules.scripts",
+    "modules.shared",
+    "modules.upscaler",
+    "utils",
+]
+original_imported_modules = {}
+for module in modules_used:
+    if module in sys.modules:
+        original_imported_modules[module] = sys.modules.pop(module)
+
+# Proceed with node setup
+from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
+__all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS"]
+
+# Clean up imports
+# Remove repo directory from path
+sys.path.remove(repo_dir)
+# Remove any new modules
+modules_to_remove = []
+for module in sys.modules:
+    if module not in original_modules:
+        modules_to_remove.append(module)
+for module in modules_to_remove:
+    del sys.modules[module]
+
+# Restore original modules
+sys.modules.update(original_imported_modules)
diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/gradio.py b/custom_nodes/ComfyUI_UltimateSDUpscale/gradio.py
new file mode 100644
index 0000000000000000000000000000000000000000..0baca4418b105ad30b5f0084d2bbd72b51d19d20
--- /dev/null
+++ b/custom_nodes/ComfyUI_UltimateSDUpscale/gradio.py
@@ -0,0 +1 @@
+# Empty gradio module for the ultimate-upscale.py import because gradio is not needed
\ No newline at end of file
diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/modules/devices.py b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/devices.py
new file mode 100644
index 0000000000000000000000000000000000000000..3e37b88d60d4043b032d895bcd9dae5251fd73c2
--- /dev/null
+++ b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/devices.py
@@ -0,0 +1,2 @@
+def torch_gc():
+    pass
diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/modules/images.py b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/images.py
new file mode 100644
index 0000000000000000000000000000000000000000..502c819a5cd5d7dad43061020f44be7fd01430e9
--- /dev/null
+++ b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/images.py
@@ -0,0 +1,8 @@
+from PIL import Image
+
+
+def flatten(img, bgcolor):
+    # Replace transparency with bgcolor
+    if img.mode == "RGB":
+        return img
+    # Composite over the background color; convert first so non-RGBA modes gain an alpha channel
+    return Image.alpha_composite(Image.new("RGBA", img.size, bgcolor), img.convert("RGBA")).convert("RGB")
diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/modules/processing.py b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/processing.py
new file mode 100644
index 0000000000000000000000000000000000000000..f001f0e63c4a472fdb8774240391bfdf8e17936b
--- /dev/null
+++ b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/processing.py
@@ -0,0 +1,163 @@
+from PIL import Image, ImageFilter
+import torch
+import math
+from nodes import common_ksampler, VAEEncode, VAEDecode, VAEDecodeTiled
+from utils import pil_to_tensor, tensor_to_pil, get_crop_region, expand_crop, crop_cond
+from modules import shared
+
+if not hasattr(Image, 'Resampling'):  # For older versions of Pillow
+    Image.Resampling = Image
+
+
+class StableDiffusionProcessing:
+
+    def __init__(self, init_img, model, positive, negative, vae, seed, steps, cfg, sampler_name, scheduler, denoise, upscale_by, uniform_tile_mode, tiled_decode):
+        # Variables used by the USDU script
+        self.init_images = [init_img]
+        self.image_mask = None
+        self.mask_blur = 0
+        self.inpaint_full_res_padding = 0
+        self.width = init_img.width
+        self.height = init_img.height
+
+        # ComfyUI Sampler inputs
+        self.model = model
+        self.positive = positive
+        self.negative = negative
+        self.vae = vae
+        self.seed = seed
+        self.steps = steps
+        self.cfg = cfg
+        self.sampler_name = sampler_name
+        self.scheduler = scheduler
+        self.denoise = denoise
+
+        # Variables used only by this script
+        self.init_size = init_img.width, init_img.height
+        self.upscale_by = upscale_by
+        self.uniform_tile_mode = uniform_tile_mode
+        self.tiled_decode = tiled_decode
+        self.vae_decoder = VAEDecode()
+        self.vae_encoder = VAEEncode()
+        self.vae_decoder_tiled = VAEDecodeTiled()
+
+        # Other required A1111 variables for the USDU script that are currently unused in this script
+        self.extra_generation_params = {}
+
+
+class Processed:
+
+    def __init__(self, p: StableDiffusionProcessing, images: list, seed: int, info: str):
+        self.images = images
+        self.seed = seed
+        self.info = info
+
+    def infotext(self, p: StableDiffusionProcessing, index):
+        return None
+
+
+def fix_seed(p: StableDiffusionProcessing):
+    pass
+
+
+def process_images(p: StableDiffusionProcessing) -> Processed:
+    # Where the main image generation happens in A1111
+
+    # Setup
+    image_mask = p.image_mask.convert('L')
+    init_image = p.init_images[0]
+
+    # Locate the white region of the mask outlining the tile and add padding
+    crop_region = get_crop_region(image_mask, p.inpaint_full_res_padding)
+
+    if p.uniform_tile_mode:
+        # Expand the crop region to match the processing size ratio and then resize it to the processing size
+        x1, y1, x2, y2 = crop_region
+        crop_width = x2 - x1
+        crop_height = y2 - y1
+        crop_ratio = crop_width / crop_height
+        p_ratio = p.width / p.height
+        if crop_ratio > p_ratio:
+            target_width = crop_width
+            target_height = round(crop_width / p_ratio)
+        else:
+            target_width = round(crop_height * p_ratio)
+            target_height = crop_height
+        crop_region, _ = expand_crop(crop_region, image_mask.width, image_mask.height, target_width, target_height)
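+        # Worked example with illustrative numbers (not from the source): at a
+        # 512x512 processing size, p_ratio = 1.0; a 600x400 mask crop gives
+        # crop_ratio = 1.5 > p_ratio, so the target grows to 600x600 before
+        # expand_crop fits it to the image bounds. The tile is then resized to
+        # tile_size (512x512) further below before sampling.
+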
tile_size = p.width, p.height + else: + # Uses the minimal size that can fit the mask, minimizes tile size but may lead to image sizes that the model is not trained on + x1, y1, x2, y2 = crop_region + crop_width = x2 - x1 + crop_height = y2 - y1 + target_width = math.ceil(crop_width / 8) * 8 + target_height = math.ceil(crop_height / 8) * 8 + crop_region, tile_size = expand_crop(crop_region, image_mask.width, + image_mask.height, target_width, target_height) + + # Blur the mask + if p.mask_blur > 0: + image_mask = image_mask.filter(ImageFilter.GaussianBlur(p.mask_blur)) + + # Crop the images to get the tiles that will be used for generation + tiles = [img.crop(crop_region) for img in shared.batch] + + # Assume the same size for all images in the batch + initial_tile_size = tiles[0].size + + # Resize if necessary + for i, tile in enumerate(tiles): + if tile.size != tile_size: + tiles[i] = tile.resize(tile_size, Image.Resampling.LANCZOS) + + # Crop conditioning + positive_cropped = crop_cond(p.positive, crop_region, p.init_size, init_image.size, tile_size) + negative_cropped = crop_cond(p.negative, crop_region, p.init_size, init_image.size, tile_size) + + # Encode the image + batched_tiles = torch.cat([pil_to_tensor(tile) for tile in tiles], dim=0) + (latent,) = p.vae_encoder.encode(p.vae, batched_tiles) + + # Generate samples + (samples,) = common_ksampler(p.model, p.seed, p.steps, p.cfg, p.sampler_name, + p.scheduler, positive_cropped, negative_cropped, latent, denoise=p.denoise) + + # Decode the sample + if not p.tiled_decode: + (decoded,) = p.vae_decoder.decode(p.vae, samples) + else: + print("[USDU] Using tiled decode") + (decoded,) = p.vae_decoder_tiled.decode(p.vae, samples, 512) # Default tile size is 512 + + # Convert the sample to a PIL image + tiles_sampled = [tensor_to_pil(decoded, i) for i in range(len(decoded))] + + for i, tile_sampled in enumerate(tiles_sampled): + init_image = shared.batch[i] + + # Resize back to the original size + if tile_sampled.size != initial_tile_size: + tile_sampled = tile_sampled.resize(initial_tile_size, Image.Resampling.LANCZOS) + + + # Put the tile into position + image_tile_only = Image.new('RGBA', init_image.size) + image_tile_only.paste(tile_sampled, crop_region[:2]) + + # Add the mask as an alpha channel + # Must make a copy due to the possibility of an edge becoming black + temp = image_tile_only.copy() + temp.putalpha(image_mask) + image_tile_only.paste(temp, image_tile_only) + + # Add back the tile to the initial image according to the mask in the alpha channel + result = init_image.convert('RGBA') + result.alpha_composite(image_tile_only) + + # Convert back to RGB + result = result.convert('RGB') + + shared.batch[i] = result + + processed = Processed(p, [shared.batch[0]], p.seed, None) + return processed diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/modules/scripts.py b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/scripts.py new file mode 100644 index 0000000000000000000000000000000000000000..5cbd134fc07811ffaad9b9f05033603d32c29c52 --- /dev/null +++ b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/scripts.py @@ -0,0 +1,2 @@ +class Script: + pass diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/modules/shared.py b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/shared.py new file mode 100644 index 0000000000000000000000000000000000000000..9d4bdd7217e8a66944c469d782de2474589801a9 --- /dev/null +++ b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/shared.py @@ -0,0 +1,24 @@ +class Options: + img2img_background_color = "#ffffff" # Set 
to white for now
+
+
+class State:
+    interrupted = False
+
+    def begin(self):
+        pass
+
+    def end(self):
+        pass
+
+
+opts = Options()
+state = State()
+
+# Will only ever hold 1 upscaler
+sd_upscalers = [None]
+# The upscaler usable by ComfyUI nodes
+actual_upscaler = None
+
+# Batch of images to upscale
+batch = None
diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/modules/upscaler.py b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/upscaler.py
new file mode 100644
index 0000000000000000000000000000000000000000..b05f547fbc9af775ec528c744b59fa872beb7fc5
--- /dev/null
+++ b/custom_nodes/ComfyUI_UltimateSDUpscale/modules/upscaler.py
@@ -0,0 +1,30 @@
+from PIL import Image
+from utils import tensor_to_pil, pil_to_tensor
+from comfy_extras.nodes_upscale_model import ImageUpscaleWithModel
+from modules import shared
+
+if not hasattr(Image, 'Resampling'):  # For older versions of Pillow
+    Image.Resampling = Image
+
+
+class Upscaler:
+
+    def _upscale(self, img: Image.Image, scale):
+        if shared.actual_upscaler is None:
+            # No upscale model provided: plain nearest-neighbor resize; round in case scale is fractional
+            return img.resize((round(img.width * scale), round(img.height * scale)), Image.Resampling.NEAREST)
+        tensor = pil_to_tensor(img)
+        image_upscale_node = ImageUpscaleWithModel()
+        (upscaled,) = image_upscale_node.upscale(shared.actual_upscaler, tensor)
+        return tensor_to_pil(upscaled)
+
+    def upscale(self, img: Image.Image, scale, selected_model: str = None):
+        # Upscales every image in the shared batch; the img argument is superseded by the batch images
+        shared.batch = [self._upscale(img, scale) for img in shared.batch]
+        return shared.batch[0]
+
+
+class UpscalerData:
+    name = ""
+    data_path = ""
+
+    def __init__(self):
+        self.scaler = Upscaler()
diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/nodes.py b/custom_nodes/ComfyUI_UltimateSDUpscale/nodes.py
new file mode 100644
index 0000000000000000000000000000000000000000..6dc5292b5845cc030ef25344d6db61e80cfe1219
--- /dev/null
+++ b/custom_nodes/ComfyUI_UltimateSDUpscale/nodes.py
@@ -0,0 +1,191 @@
+# ComfyUI Node for Ultimate SD Upscale by Coyote-A: https://github.com/Coyote-A/ultimate-upscale-for-automatic1111
+
+import torch
+import comfy
+from usdu_patch import usdu
+from utils import tensor_to_pil, pil_to_tensor
+from modules.processing import StableDiffusionProcessing
+import modules.shared as shared
+from modules.upscaler import UpscalerData
+
+MAX_RESOLUTION = 8192
+# The modes available for Ultimate SD Upscale
+MODES = {
+    "Linear": usdu.USDUMode.LINEAR,
+    "Chess": usdu.USDUMode.CHESS,
+    "None": usdu.USDUMode.NONE,
+}
+# The seam fix modes
+SEAM_FIX_MODES = {
+    "None": usdu.USDUSFMode.NONE,
+    "Band Pass": usdu.USDUSFMode.BAND_PASS,
+    "Half Tile": usdu.USDUSFMode.HALF_TILE,
+    "Half Tile + Intersections": usdu.USDUSFMode.HALF_TILE_PLUS_INTERSECTIONS,
+}
+
+
+def USDU_base_inputs():
+    return [
+        ("image", ("IMAGE",)),
+        # Sampling Params
+        ("model", ("MODEL",)),
+        ("positive", ("CONDITIONING",)),
+        ("negative", ("CONDITIONING",)),
+        ("vae", ("VAE",)),
+        ("upscale_by", ("FLOAT", {"default": 2, "min": 0.05, "max": 4, "step": 0.05})),
+        ("seed", ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff})),
+        ("steps", ("INT", {"default": 20, "min": 1, "max": 10000, "step": 1})),
+        ("cfg", ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0})),
+        ("sampler_name", (comfy.samplers.KSampler.SAMPLERS,)),
+        ("scheduler", (comfy.samplers.KSampler.SCHEDULERS,)),
+        ("denoise", ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0, "step": 0.01})),
+        # Upscale Params
+        ("upscale_model", ("UPSCALE_MODEL",)),
+        ("mode_type", (list(MODES.keys()),)),
+        ("tile_width", ("INT", {"default": 512, "min": 64, "max": MAX_RESOLUTION, "step": 8})),
+        ("tile_height",
("INT", {"default": 512, "min": 64, "max": MAX_RESOLUTION, "step": 8})), + ("mask_blur", ("INT", {"default": 8, "min": 0, "max": 64, "step": 1})), + ("tile_padding", ("INT", {"default": 32, "min": 0, "max": MAX_RESOLUTION, "step": 8})), + # Seam fix params + ("seam_fix_mode", (list(SEAM_FIX_MODES.keys()),)), + ("seam_fix_denoise", ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01})), + ("seam_fix_width", ("INT", {"default": 64, "min": 0, "max": MAX_RESOLUTION, "step": 8})), + ("seam_fix_mask_blur", ("INT", {"default": 8, "min": 0, "max": 64, "step": 1})), + ("seam_fix_padding", ("INT", {"default": 16, "min": 0, "max": MAX_RESOLUTION, "step": 8})), + # Misc + ("force_uniform_tiles", ("BOOLEAN", {"default": True})), + ("tiled_decode", ("BOOLEAN", {"default": False})), + ] + + +def prepare_inputs(required: list, optional: list = None): + inputs = {} + if required: + inputs["required"] = {} + for name, type in required: + inputs["required"][name] = type + if optional: + inputs["optional"] = {} + for name, type in optional: + inputs["optional"][name] = type + return inputs + + +def remove_input(inputs: list, input_name: str): + for i, (n, _) in enumerate(inputs): + if n == input_name: + del inputs[i] + break + + +def rename_input(inputs: list, old_name: str, new_name: str): + for i, (n, t) in enumerate(inputs): + if n == old_name: + inputs[i] = (new_name, t) + break + + +class UltimateSDUpscale: + @classmethod + def INPUT_TYPES(s): + return prepare_inputs(USDU_base_inputs()) + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "upscale" + CATEGORY = "image/upscaling" + + def upscale(self, image, model, positive, negative, vae, upscale_by, seed, + steps, cfg, sampler_name, scheduler, denoise, upscale_model, + mode_type, tile_width, tile_height, mask_blur, tile_padding, + seam_fix_mode, seam_fix_denoise, seam_fix_mask_blur, + seam_fix_width, seam_fix_padding, force_uniform_tiles, tiled_decode): + # + # Set up A1111 patches + # + + # Upscaler + # An object that the script works with + shared.sd_upscalers[0] = UpscalerData() + # Where the actual upscaler is stored, will be used when the script upscales using the Upscaler in UpscalerData + shared.actual_upscaler = upscale_model + + # Set the batch of images + shared.batch = [tensor_to_pil(image, i) for i in range(len(image))] + + # Processing + sdprocessing = StableDiffusionProcessing( + tensor_to_pil(image), model, positive, negative, vae, + seed, steps, cfg, sampler_name, scheduler, denoise, upscale_by, force_uniform_tiles, tiled_decode + ) + + # + # Running the script + # + script = usdu.Script() + processed = script.run(p=sdprocessing, _=None, tile_width=tile_width, tile_height=tile_height, + mask_blur=mask_blur, padding=tile_padding, seams_fix_width=seam_fix_width, + seams_fix_denoise=seam_fix_denoise, seams_fix_padding=seam_fix_padding, + upscaler_index=0, save_upscaled_image=False, redraw_mode=MODES[mode_type], + save_seams_fix_image=False, seams_fix_mask_blur=seam_fix_mask_blur, + seams_fix_type=SEAM_FIX_MODES[seam_fix_mode], target_size_type=2, + custom_width=None, custom_height=None, custom_scale=upscale_by) + + # Return the resulting images + images = [pil_to_tensor(img) for img in shared.batch] + tensor = torch.cat(images, dim=0) + return (tensor,) + + +class UltimateSDUpscaleNoUpscale: + @classmethod + def INPUT_TYPES(s): + required = USDU_base_inputs() + remove_input(required, "upscale_model") + remove_input(required, "upscale_by") + rename_input(required, "image", "upscaled_image") + return prepare_inputs(required) + + 
RETURN_TYPES = ("IMAGE",) + FUNCTION = "upscale" + CATEGORY = "image/upscaling" + + def upscale(self, upscaled_image, model, positive, negative, vae, seed, + steps, cfg, sampler_name, scheduler, denoise, + mode_type, tile_width, tile_height, mask_blur, tile_padding, + seam_fix_mode, seam_fix_denoise, seam_fix_mask_blur, + seam_fix_width, seam_fix_padding, force_uniform_tiles, tiled_decode): + + shared.sd_upscalers[0] = UpscalerData() + shared.actual_upscaler = None + shared.batch = [tensor_to_pil(upscaled_image, i) for i in range(len(upscaled_image))] + sdprocessing = StableDiffusionProcessing( + tensor_to_pil(upscaled_image), model, positive, negative, vae, + seed, steps, cfg, sampler_name, scheduler, denoise, 1, force_uniform_tiles, tiled_decode + ) + + script = usdu.Script() + processed = script.run(p=sdprocessing, _=None, tile_width=tile_width, tile_height=tile_height, + mask_blur=mask_blur, padding=tile_padding, seams_fix_width=seam_fix_width, + seams_fix_denoise=seam_fix_denoise, seams_fix_padding=seam_fix_padding, + upscaler_index=0, save_upscaled_image=False, redraw_mode=MODES[mode_type], + save_seams_fix_image=False, seams_fix_mask_blur=seam_fix_mask_blur, + seams_fix_type=SEAM_FIX_MODES[seam_fix_mode], target_size_type=2, + custom_width=None, custom_height=None, custom_scale=1) + + images = [pil_to_tensor(img) for img in shared.batch] + tensor = torch.cat(images, dim=0) + return (tensor,) + + +# A dictionary that contains all nodes you want to export with their names +# NOTE: names should be globally unique +NODE_CLASS_MAPPINGS = { + "UltimateSDUpscale": UltimateSDUpscale, + "UltimateSDUpscaleNoUpscale": UltimateSDUpscaleNoUpscale +} + +# A dictionary that contains the friendly/humanly readable titles for the nodes +NODE_DISPLAY_NAME_MAPPINGS = { + "UltimateSDUpscale": "Ultimate SD Upscale", + "UltimateSDUpscaleNoUpscale": "Ultimate SD Upscale (No Upscale)" +} diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/__init__.py b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..a30c56508421465568ab2697a638bafb3e9326ca --- /dev/null +++ b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/__init__.py @@ -0,0 +1,14 @@ +import os +import sys +import importlib.util + +repositories_path = os.path.dirname(os.path.realpath(__file__)) + +# Import the script +script_name = os.path.join("scripts", "ultimate-upscale") +repo_name = "ultimate_sd_upscale" +script_path = os.path.join(repositories_path, repo_name, f"{script_name}.py") +spec = importlib.util.spec_from_file_location(script_name, script_path) +ultimate_upscale = importlib.util.module_from_spec(spec) +sys.modules[script_name] = ultimate_upscale +spec.loader.exec_module(ultimate_upscale) diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/.gitignore b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..600d2d33badf45cc068e01d2e3c837e11c417bc4 --- /dev/null +++ b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/.gitignore @@ -0,0 +1 @@ +.vscode \ No newline at end of file diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/LICENSE b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..ebfe3f5212b6396c75ee993947fe1ebdd6a91207 --- /dev/null +++ 
b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/LICENSE
@@ -0,0 +1,674 @@
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works.  By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.  We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors.  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights.  Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received.  You must make sure that they, too, receive
+or can get the source code.  And you must show them these terms so they
+know their rights.
+
+  Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+  For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software.  For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+  Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so.  This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software.  The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable.  Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products.  If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+  Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary.
To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. 
However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. 
+ + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. + + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. 
+ + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. + + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. 
Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. + + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. 
+ + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. 
For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. 
You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. 
+ + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + ultimate-upscale-for-automatic1111 + Copyright (C) 2023 Mirzam + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . + +Also add information on how to contact you by electronic and paper mail. + + If the program does terminal interaction, make it output a short +notice like this when it starts in an interactive mode: + + Copyright (C) 2023 Mirzam + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, your program's commands +might be different; for a GUI interface, you would use an "about box". 
+ + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU GPL, see +. + + The GNU General Public License does not permit incorporating your program +into proprietary programs. If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications with +the library. If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +. diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/README.md b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/README.md new file mode 100644 index 0000000000000000000000000000000000000000..d158139e43aa9f19a85b013299fd8fd6b03d405f --- /dev/null +++ b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/README.md @@ -0,0 +1,43 @@ +# Ultimate SD Upscale extension for [AUTOMATIC1111 Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) +Now you have the opportunity to use a large denoise (0.3-0.5) and not spawn many artifacts. Works on any video card, since you can use a 512x512 tile size and the image will converge. + +News channel: https://t.me/usdunews + +# Instructions +All instructions can be found on the project's [wiki](https://github.com/Coyote-A/ultimate-upscale-for-automatic1111/wiki). + +# Examples +More on [wiki page](https://github.com/Coyote-A/ultimate-upscale-for-automatic1111/wiki/Examples) + +
+ E1 + Original image + + ![Original](https://i.imgur.com/J8mRYOD.png) + + 2k upscaled. **Tile size**: 512, **Padding**: 32, **Mask blur**: 16, **Denoise**: 0.4 + ![2k upscale](https://i.imgur.com/0aKua4r.png) +
+ +
+ E2 + Original image + + ![Original](https://i.imgur.com/aALNI2w.png) + + 2k upscaled. **Tile size**: 768, **Padding**: 55, **Mask blur**: 20, **Denoise**: 0.35 + ![2k upscale](https://i.imgur.com/B5PHz0J.png) + + 4k upscaled. **Tile size**: 768, **Padding**: 55, **Mask blur**: 20, **Denoise**: 0.35 + ![4k upscale](https://i.imgur.com/tIUQ7TJ.jpg) +
+ +
+ E3 + Original image + + ![Original](https://i.imgur.com/AGtszA8.png) + + 4k upscaled. **Tile size**: 768, **Padding**: 55, **Mask blur**: 20, **Denoise**: 0.4 + ![4k upscale](https://i.imgur.com/LCYLfCs.jpg) +
diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py new file mode 100644 index 0000000000000000000000000000000000000000..7bb7ae02b4629b9b171e3ff027851910e5f0ea43 --- /dev/null +++ b/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py @@ -0,0 +1,557 @@ +import math +import gradio as gr +from PIL import Image, ImageDraw, ImageOps +from modules import processing, shared, images, devices, scripts +from modules.processing import StableDiffusionProcessing +from modules.processing import Processed +from modules.shared import opts, state +from enum import Enum + +class USDUMode(Enum): + LINEAR = 0 + CHESS = 1 + NONE = 2 + +class USDUSFMode(Enum): + NONE = 0 + BAND_PASS = 1 + HALF_TILE = 2 + HALF_TILE_PLUS_INTERSECTIONS = 3 + +class USDUpscaler(): + + def __init__(self, p, image, upscaler_index:int, save_redraw, save_seams_fix, tile_width, tile_height) -> None: + self.p:StableDiffusionProcessing = p + self.image:Image = image + self.scale_factor = math.ceil(max(p.width, p.height) / max(image.width, image.height)) + self.upscaler = shared.sd_upscalers[upscaler_index] + self.redraw = USDURedraw() + self.redraw.save = save_redraw + self.redraw.tile_width = tile_width if tile_width > 0 else tile_height + self.redraw.tile_height = tile_height if tile_height > 0 else tile_width + self.seams_fix = USDUSeamsFix() + self.seams_fix.save = save_seams_fix + self.seams_fix.tile_width = tile_width if tile_width > 0 else tile_height + self.seams_fix.tile_height = tile_height if tile_height > 0 else tile_width + self.initial_info = None + self.rows = math.ceil(self.p.height / self.redraw.tile_height) + self.cols = math.ceil(self.p.width / self.redraw.tile_width) + + def get_factor(self, num): + # Its just return, don't need elif + if num == 1: + return 2 + if num % 4 == 0: + return 4 + if num % 3 == 0: + return 3 + if num % 2 == 0: + return 2 + return 0 + + def get_factors(self): + scales = [] + current_scale = 1 + current_scale_factor = self.get_factor(self.scale_factor) + while current_scale_factor == 0: + self.scale_factor += 1 + current_scale_factor = self.get_factor(self.scale_factor) + while current_scale < self.scale_factor: + current_scale_factor = self.get_factor(self.scale_factor // current_scale) + scales.append(current_scale_factor) + current_scale = current_scale * current_scale_factor + if current_scale_factor == 0: + break + self.scales = enumerate(scales) + + def upscale(self): + # Log info + print(f"Canva size: {self.p.width}x{self.p.height}") + print(f"Image size: {self.image.width}x{self.image.height}") + print(f"Scale factor: {self.scale_factor}") + # Check upscaler is not empty + if self.upscaler.name == "None": + self.image = self.image.resize((self.p.width, self.p.height), resample=Image.LANCZOS) + return + # Get list with scale factors + self.get_factors() + # Upscaling image over all factors + for index, value in self.scales: + print(f"Upscaling iteration {index+1} with scale factor {value}") + self.image = self.upscaler.scaler.upscale(self.image, value, self.upscaler.data_path) + # Resize image to set values + self.image = self.image.resize((self.p.width, self.p.height), resample=Image.LANCZOS) + + def setup_redraw(self, redraw_mode, padding, mask_blur): + self.redraw.mode = USDUMode(redraw_mode) + self.redraw.enabled = self.redraw.mode != USDUMode.NONE + self.redraw.padding = 
padding + self.p.mask_blur = mask_blur + + def setup_seams_fix(self, padding, denoise, mask_blur, width, mode): + self.seams_fix.padding = padding + self.seams_fix.denoise = denoise + self.seams_fix.mask_blur = mask_blur + self.seams_fix.width = width + self.seams_fix.mode = USDUSFMode(mode) + self.seams_fix.enabled = self.seams_fix.mode != USDUSFMode.NONE + + def save_image(self): + if type(self.p.prompt) != list: + images.save_image(self.image, self.p.outpath_samples, "", self.p.seed, self.p.prompt, opts.samples_format, info=self.initial_info, p=self.p) + else: + images.save_image(self.image, self.p.outpath_samples, "", self.p.seed, self.p.prompt[0], opts.samples_format, info=self.initial_info, p=self.p) + + def calc_jobs_count(self): + redraw_job_count = (self.rows * self.cols) if self.redraw.enabled else 0 + seams_job_count = 0 + if self.seams_fix.mode == USDUSFMode.BAND_PASS: + seams_job_count = self.rows + self.cols - 2 + elif self.seams_fix.mode == USDUSFMode.HALF_TILE: + seams_job_count = self.rows * (self.cols - 1) + (self.rows - 1) * self.cols + elif self.seams_fix.mode == USDUSFMode.HALF_TILE_PLUS_INTERSECTIONS: + seams_job_count = self.rows * (self.cols - 1) + (self.rows - 1) * self.cols + (self.rows - 1) * (self.cols - 1) + + state.job_count = redraw_job_count + seams_job_count + + def print_info(self): + print(f"Tile size: {self.redraw.tile_width}x{self.redraw.tile_height}") + print(f"Tiles amount: {self.rows * self.cols}") + print(f"Grid: {self.rows}x{self.cols}") + print(f"Redraw enabled: {self.redraw.enabled}") + print(f"Seams fix mode: {self.seams_fix.mode.name}") + + def add_extra_info(self): + self.p.extra_generation_params["Ultimate SD upscale upscaler"] = self.upscaler.name + self.p.extra_generation_params["Ultimate SD upscale tile_width"] = self.redraw.tile_width + self.p.extra_generation_params["Ultimate SD upscale tile_height"] = self.redraw.tile_height + self.p.extra_generation_params["Ultimate SD upscale mask_blur"] = self.p.mask_blur + self.p.extra_generation_params["Ultimate SD upscale padding"] = self.redraw.padding + + def process(self): + state.begin() + self.calc_jobs_count() + self.result_images = [] + if self.redraw.enabled: + self.image = self.redraw.start(self.p, self.image, self.rows, self.cols) + self.initial_info = self.redraw.initial_info + self.result_images.append(self.image) + if self.redraw.save: + self.save_image() + + if self.seams_fix.enabled: + self.image = self.seams_fix.start(self.p, self.image, self.rows, self.cols) + self.initial_info = self.seams_fix.initial_info + self.result_images.append(self.image) + if self.seams_fix.save: + self.save_image() + state.end() + +class USDURedraw(): + + def init_draw(self, p, width, height): + p.inpaint_full_res = True + p.inpaint_full_res_padding = self.padding + p.width = math.ceil((self.tile_width+self.padding) / 64) * 64 + p.height = math.ceil((self.tile_height+self.padding) / 64) * 64 + mask = Image.new("L", (width, height), "black") + draw = ImageDraw.Draw(mask) + return mask, draw + + def calc_rectangle(self, xi, yi): + x1 = xi * self.tile_width + y1 = yi * self.tile_height + x2 = xi * self.tile_width + self.tile_width + y2 = yi * self.tile_height + self.tile_height + + return x1, y1, x2, y2 + + def linear_process(self, p, image, rows, cols): + mask, draw = self.init_draw(p, image.width, image.height) + for yi in range(rows): + for xi in range(cols): + if state.interrupted: + break + draw.rectangle(self.calc_rectangle(xi, yi), fill="white") + p.init_images = [image] + p.image_mask = mask + 
processed = processing.process_images(p) + draw.rectangle(self.calc_rectangle(xi, yi), fill="black") + if (len(processed.images) > 0): + image = processed.images[0] + + p.width = image.width + p.height = image.height + self.initial_info = processed.infotext(p, 0) + + return image + + def chess_process(self, p, image, rows, cols): + mask, draw = self.init_draw(p, image.width, image.height) + tiles = [] + # calc tiles colors + for yi in range(rows): + for xi in range(cols): + if state.interrupted: + break + if xi == 0: + tiles.append([]) + color = xi % 2 == 0 + if yi > 0 and yi % 2 != 0: + color = not color + tiles[yi].append(color) + + for yi in range(len(tiles)): + for xi in range(len(tiles[yi])): + if state.interrupted: + break + if not tiles[yi][xi]: + tiles[yi][xi] = not tiles[yi][xi] + continue + tiles[yi][xi] = not tiles[yi][xi] + draw.rectangle(self.calc_rectangle(xi, yi), fill="white") + p.init_images = [image] + p.image_mask = mask + processed = processing.process_images(p) + draw.rectangle(self.calc_rectangle(xi, yi), fill="black") + if (len(processed.images) > 0): + image = processed.images[0] + + for yi in range(len(tiles)): + for xi in range(len(tiles[yi])): + if state.interrupted: + break + if not tiles[yi][xi]: + continue + draw.rectangle(self.calc_rectangle(xi, yi), fill="white") + p.init_images = [image] + p.image_mask = mask + processed = processing.process_images(p) + draw.rectangle(self.calc_rectangle(xi, yi), fill="black") + if (len(processed.images) > 0): + image = processed.images[0] + + p.width = image.width + p.height = image.height + self.initial_info = processed.infotext(p, 0) + + return image + + def start(self, p, image, rows, cols): + self.initial_info = None + if self.mode == USDUMode.LINEAR: + return self.linear_process(p, image, rows, cols) + if self.mode == USDUMode.CHESS: + return self.chess_process(p, image, rows, cols) + +class USDUSeamsFix(): + + def init_draw(self, p): + self.initial_info = None + p.width = math.ceil((self.tile_width+self.padding) / 64) * 64 + p.height = math.ceil((self.tile_height+self.padding) / 64) * 64 + + def half_tile_process(self, p, image, rows, cols): + + self.init_draw(p) + processed = None + + gradient = Image.linear_gradient("L") + row_gradient = Image.new("L", (self.tile_width, self.tile_height), "black") + row_gradient.paste(gradient.resize( + (self.tile_width, self.tile_height//2), resample=Image.BICUBIC), (0, 0)) + row_gradient.paste(gradient.rotate(180).resize( + (self.tile_width, self.tile_height//2), resample=Image.BICUBIC), + (0, self.tile_height//2)) + col_gradient = Image.new("L", (self.tile_width, self.tile_height), "black") + col_gradient.paste(gradient.rotate(90).resize( + (self.tile_width//2, self.tile_height), resample=Image.BICUBIC), (0, 0)) + col_gradient.paste(gradient.rotate(270).resize( + (self.tile_width//2, self.tile_height), resample=Image.BICUBIC), (self.tile_width//2, 0)) + + p.denoising_strength = self.denoise + p.mask_blur = self.mask_blur + + for yi in range(rows-1): + for xi in range(cols): + if state.interrupted: + break + p.width = self.tile_width + p.height = self.tile_height + p.inpaint_full_res = True + p.inpaint_full_res_padding = self.padding + mask = Image.new("L", (image.width, image.height), "black") + mask.paste(row_gradient, (xi*self.tile_width, yi*self.tile_height + self.tile_height//2)) + + p.init_images = [image] + p.image_mask = mask + processed = processing.process_images(p) + if (len(processed.images) > 0): + image = processed.images[0] + + for yi in range(rows): + for xi in 
range(cols-1): + if state.interrupted: + break + p.width = self.tile_width + p.height = self.tile_height + p.inpaint_full_res = True + p.inpaint_full_res_padding = self.padding + mask = Image.new("L", (image.width, image.height), "black") + mask.paste(col_gradient, (xi*self.tile_width+self.tile_width//2, yi*self.tile_height)) + + p.init_images = [image] + p.image_mask = mask + processed = processing.process_images(p) + if (len(processed.images) > 0): + image = processed.images[0] + + p.width = image.width + p.height = image.height + if processed is not None: + self.initial_info = processed.infotext(p, 0) + + return image + + def half_tile_process_corners(self, p, image, rows, cols): + fixed_image = self.half_tile_process(p, image, rows, cols) + processed = None + self.init_draw(p) + gradient = Image.radial_gradient("L").resize( + (self.tile_width, self.tile_height), resample=Image.BICUBIC) + gradient = ImageOps.invert(gradient) + p.denoising_strength = self.denoise + #p.mask_blur = 0 + p.mask_blur = self.mask_blur + + for yi in range(rows-1): + for xi in range(cols-1): + if state.interrupted: + break + p.width = self.tile_width + p.height = self.tile_height + p.inpaint_full_res = True + p.inpaint_full_res_padding = 0 + mask = Image.new("L", (fixed_image.width, fixed_image.height), "black") + mask.paste(gradient, (xi*self.tile_width + self.tile_width//2, + yi*self.tile_height + self.tile_height//2)) + + p.init_images = [fixed_image] + p.image_mask = mask + processed = processing.process_images(p) + if (len(processed.images) > 0): + fixed_image = processed.images[0] + + p.width = fixed_image.width + p.height = fixed_image.height + if processed is not None: + self.initial_info = processed.infotext(p, 0) + + return fixed_image + + def band_pass_process(self, p, image, cols, rows): + + self.init_draw(p) + processed = None + + p.denoising_strength = self.denoise + p.mask_blur = 0 + + gradient = Image.linear_gradient("L") + mirror_gradient = Image.new("L", (256, 256), "black") + mirror_gradient.paste(gradient.resize((256, 128), resample=Image.BICUBIC), (0, 0)) + mirror_gradient.paste(gradient.rotate(180).resize((256, 128), resample=Image.BICUBIC), (0, 128)) + + row_gradient = mirror_gradient.resize((image.width, self.width), resample=Image.BICUBIC) + col_gradient = mirror_gradient.rotate(90).resize((self.width, image.height), resample=Image.BICUBIC) + + for xi in range(1, rows): + if state.interrupted: + break + p.width = self.width + self.padding * 2 + p.height = image.height + p.inpaint_full_res = True + p.inpaint_full_res_padding = self.padding + mask = Image.new("L", (image.width, image.height), "black") + mask.paste(col_gradient, (xi * self.tile_width - self.width // 2, 0)) + + p.init_images = [image] + p.image_mask = mask + processed = processing.process_images(p) + if (len(processed.images) > 0): + image = processed.images[0] + for yi in range(1, cols): + if state.interrupted: + break + p.width = image.width + p.height = self.width + self.padding * 2 + p.inpaint_full_res = True + p.inpaint_full_res_padding = self.padding + mask = Image.new("L", (image.width, image.height), "black") + mask.paste(row_gradient, (0, yi * self.tile_height - self.width // 2)) + + p.init_images = [image] + p.image_mask = mask + processed = processing.process_images(p) + if (len(processed.images) > 0): + image = processed.images[0] + + p.width = image.width + p.height = image.height + if processed is not None: + self.initial_info = processed.infotext(p, 0) + + return image + + def start(self, p, image, rows, 
cols): + if USDUSFMode(self.mode) == USDUSFMode.BAND_PASS: + return self.band_pass_process(p, image, rows, cols) + elif USDUSFMode(self.mode) == USDUSFMode.HALF_TILE: + return self.half_tile_process(p, image, rows, cols) + elif USDUSFMode(self.mode) == USDUSFMode.HALF_TILE_PLUS_INTERSECTIONS: + return self.half_tile_process_corners(p, image, rows, cols) + else: + return image + +class Script(scripts.Script): + def title(self): + return "Ultimate SD upscale" + + def show(self, is_img2img): + return is_img2img + + def ui(self, is_img2img): + + target_size_types = [ + "From img2img2 settings", + "Custom size", + "Scale from image size" + ] + + seams_fix_types = [ + "None", + "Band pass", + "Half tile offset pass", + "Half tile offset pass + intersections" + ] + + redrow_modes = [ + "Linear", + "Chess", + "None" + ] + + info = gr.HTML( + "

Will upscale the image depending on the selected target size type

") + + with gr.Row(): + target_size_type = gr.Dropdown(label="Target size type", choices=[k for k in target_size_types], type="index", + value=next(iter(target_size_types))) + + custom_width = gr.Slider(label='Custom width', minimum=64, maximum=8192, step=64, value=2048, visible=False, interactive=True) + custom_height = gr.Slider(label='Custom height', minimum=64, maximum=8192, step=64, value=2048, visible=False, interactive=True) + custom_scale = gr.Slider(label='Scale', minimum=1, maximum=16, step=0.01, value=2, visible=False, interactive=True) + + gr.HTML("

Redraw options:

") + with gr.Row(): + upscaler_index = gr.Radio(label='Upscaler', choices=[x.name for x in shared.sd_upscalers], + value=shared.sd_upscalers[0].name, type="index") + with gr.Row(): + redraw_mode = gr.Dropdown(label="Type", choices=[k for k in redrow_modes], type="index", value=next(iter(redrow_modes))) + tile_width = gr.Slider(minimum=0, maximum=2048, step=64, label='Tile width', value=512) + tile_height = gr.Slider(minimum=0, maximum=2048, step=64, label='Tile height', value=0) + mask_blur = gr.Slider(label='Mask blur', minimum=0, maximum=64, step=1, value=8) + padding = gr.Slider(label='Padding', minimum=0, maximum=128, step=1, value=32) + gr.HTML("

Seams fix:

") + with gr.Row(): + seams_fix_type = gr.Dropdown(label="Type", choices=[k for k in seams_fix_types], type="index", value=next(iter(seams_fix_types))) + seams_fix_denoise = gr.Slider(label='Denoise', minimum=0, maximum=1, step=0.01, value=0.35, visible=False, interactive=True) + seams_fix_width = gr.Slider(label='Width', minimum=0, maximum=128, step=1, value=64, visible=False, interactive=True) + seams_fix_mask_blur = gr.Slider(label='Mask blur', minimum=0, maximum=64, step=1, value=4, visible=False, interactive=True) + seams_fix_padding = gr.Slider(label='Padding', minimum=0, maximum=128, step=1, value=16, visible=False, interactive=True) + gr.HTML("

Save options:

") + with gr.Row(): + save_upscaled_image = gr.Checkbox(label="Upscaled", value=True) + save_seams_fix_image = gr.Checkbox(label="Seams fix", value=False) + + def select_fix_type(fix_index): + all_visible = fix_index != 0 + mask_blur_visible = fix_index == 2 or fix_index == 3 + width_visible = fix_index == 1 + + return [gr.update(visible=all_visible), + gr.update(visible=width_visible), + gr.update(visible=mask_blur_visible), + gr.update(visible=all_visible)] + + seams_fix_type.change( + fn=select_fix_type, + inputs=seams_fix_type, + outputs=[seams_fix_denoise, seams_fix_width, seams_fix_mask_blur, seams_fix_padding] + ) + + def select_scale_type(scale_index): + is_custom_size = scale_index == 1 + is_custom_scale = scale_index == 2 + + return [gr.update(visible=is_custom_size), + gr.update(visible=is_custom_size), + gr.update(visible=is_custom_scale), + ] + + target_size_type.change( + fn=select_scale_type, + inputs=target_size_type, + outputs=[custom_width, custom_height, custom_scale] + ) + + return [info, tile_width, tile_height, mask_blur, padding, seams_fix_width, seams_fix_denoise, seams_fix_padding, + upscaler_index, save_upscaled_image, redraw_mode, save_seams_fix_image, seams_fix_mask_blur, + seams_fix_type, target_size_type, custom_width, custom_height, custom_scale] + + def run(self, p, _, tile_width, tile_height, mask_blur, padding, seams_fix_width, seams_fix_denoise, seams_fix_padding, + upscaler_index, save_upscaled_image, redraw_mode, save_seams_fix_image, seams_fix_mask_blur, + seams_fix_type, target_size_type, custom_width, custom_height, custom_scale): + + # Init + processing.fix_seed(p) + devices.torch_gc() + + p.do_not_save_grid = True + p.do_not_save_samples = True + p.inpaint_full_res = False + + p.inpainting_fill = 1 + p.n_iter = 1 + p.batch_size = 1 + + seed = p.seed + + # Init image + init_img = p.init_images[0] + if init_img == None: + return Processed(p, [], seed, "Empty image") + init_img = images.flatten(init_img, opts.img2img_background_color) + + #override size + if target_size_type == 1: + p.width = custom_width + p.height = custom_height + if target_size_type == 2: + p.width = math.ceil((init_img.width * custom_scale) / 64) * 64 + p.height = math.ceil((init_img.height * custom_scale) / 64) * 64 + + # Upscaling + upscaler = USDUpscaler(p, init_img, upscaler_index, save_upscaled_image, save_seams_fix_image, tile_width, tile_height) + upscaler.upscale() + + # Drawing + upscaler.setup_redraw(redraw_mode, padding, mask_blur) + upscaler.setup_seams_fix(seams_fix_padding, seams_fix_denoise, seams_fix_mask_blur, seams_fix_width, seams_fix_type) + upscaler.print_info() + upscaler.add_extra_info() + upscaler.process() + result_images = upscaler.result_images + + return Processed(p, result_images, seed, upscaler.initial_info if upscaler.initial_info is not None else "") + diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/usdu_patch.py b/custom_nodes/ComfyUI_UltimateSDUpscale/usdu_patch.py new file mode 100644 index 0000000000000000000000000000000000000000..3abecf1729de155529718505cb85e3d0ac6341be --- /dev/null +++ b/custom_nodes/ComfyUI_UltimateSDUpscale/usdu_patch.py @@ -0,0 +1,66 @@ +# Make some patches to the script +from repositories import ultimate_upscale as usdu +import modules.shared as shared +import math +from PIL import Image + + +if (not hasattr(Image, 'Resampling')): # For older versions of Pillow + Image.Resampling = Image + +# +# Instead of using multiples of 64, use multiples of 8 +# + +# Upscaler +old_init = usdu.USDUpscaler.__init__ + + +def 
new_init(self, p, image, upscaler_index, save_redraw, save_seams_fix, tile_width, tile_height): + p.width = math.ceil((image.width * p.upscale_by) / 8) * 8 + p.height = math.ceil((image.height * p.upscale_by) / 8) * 8 + old_init(self, p, image, upscaler_index, save_redraw, save_seams_fix, tile_width, tile_height) + + +usdu.USDUpscaler.__init__ = new_init + +# Redraw +old_setup_redraw = usdu.USDURedraw.init_draw + + +def new_setup_redraw(self, p, width, height): + mask, draw = old_setup_redraw(self, p, width, height) + p.width = math.ceil((self.tile_width + self.padding) / 8) * 8 + p.height = math.ceil((self.tile_height + self.padding) / 8) * 8 + return mask, draw + + +usdu.USDURedraw.init_draw = new_setup_redraw + +# Seams fix +old_setup_seams_fix = usdu.USDUSeamsFix.init_draw + + +def new_setup_seams_fix(self, p): + old_setup_seams_fix(self, p) + p.width = math.ceil((self.tile_width + self.padding) / 8) * 8 + p.height = math.ceil((self.tile_height + self.padding) / 8) * 8 + + +usdu.USDUSeamsFix.init_draw = new_setup_seams_fix + + +# +# Make the script upscale on a batch of images instead of one image +# + +old_upscale = usdu.USDUpscaler.upscale + + +def new_upscale(self): + old_upscale(self) + shared.batch = [self.image] + \ + [img.resize((self.p.width, self.p.height), resample=Image.LANCZOS) for img in shared.batch[1:]] + + +usdu.USDUpscaler.upscale = new_upscale diff --git a/custom_nodes/ComfyUI_UltimateSDUpscale/utils.py b/custom_nodes/ComfyUI_UltimateSDUpscale/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..a9bdcf22fd34138cab0dca9259edeb8037ea6de0 --- /dev/null +++ b/custom_nodes/ComfyUI_UltimateSDUpscale/utils.py @@ -0,0 +1,460 @@ +import numpy as np +from PIL import Image, ImageFilter +import torch +import torch.nn.functional as F +from torchvision.transforms import GaussianBlur +import math + +if (not hasattr(Image, 'Resampling')): # For older versions of Pillow + Image.Resampling = Image + +BLUR_KERNEL_SIZE = 15 + + +def tensor_to_pil(img_tensor, batch_index=0): + # Takes an image in a batch in the form of a tensor of shape [batch_size, channels, height, width] + # and returns an PIL Image with the corresponding mode deduced by the number of channels + + # Take the image in the batch given by batch_index + img_tensor = img_tensor[batch_index].unsqueeze(0) + i = 255. 
* img_tensor.cpu().numpy() + img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8).squeeze()) + return img + + +def pil_to_tensor(image): + # Takes a PIL image and returns a tensor of shape [1, height, width, channels] + image = np.array(image).astype(np.float32) / 255.0 + image = torch.from_numpy(image).unsqueeze(0) + if len(image.shape) == 3: # If the image is grayscale, add a channel dimension + image = image.unsqueeze(-1) + return image + + +def controlnet_hint_to_pil(tensor, batch_index=0): + return tensor_to_pil(tensor.movedim(1, -1), batch_index) + + +def pil_to_controlnet_hint(img): + return pil_to_tensor(img).movedim(-1, 1) + + +def crop_tensor(tensor, region): + # Takes a tensor of shape [batch_size, height, width, channels] and crops it to the given region + x1, y1, x2, y2 = region + return tensor[:, y1:y2, x1:x2, :] + + +def resize_tensor(tensor, size, mode="nearest-exact"): + # Takes a tensor of shape [B, C, H, W] and resizes + # it to a shape of [B, C, size[0], size[1]] using the given mode + return torch.nn.functional.interpolate(tensor, size=size, mode=mode) + + +def get_crop_region(mask, pad=0): + # Takes a black and white PIL image in 'L' mode and returns the coordinates of the white rectangular mask region + # Should be equivalent to the get_crop_region function from https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/masking.py + coordinates = mask.getbbox() + if coordinates is not None: + x1, y1, x2, y2 = coordinates + else: + x1, y1, x2, y2 = mask.width, mask.height, 0, 0 + # Apply padding + x1 = max(x1 - pad, 0) + y1 = max(y1 - pad, 0) + x2 = min(x2 + pad, mask.width) + y2 = min(y2 + pad, mask.height) + return fix_crop_region((x1, y1, x2, y2), (mask.width, mask.height)) + + +def fix_crop_region(region, image_size): + # Remove the extra pixel added by the get_crop_region function + image_width, image_height = image_size + x1, y1, x2, y2 = region + if x2 < image_width: + x2 -= 1 + if y2 < image_height: + y2 -= 1 + return x1, y1, x2, y2 + + +def expand_crop(region, width, height, target_width, target_height): + ''' + Expands a crop region to a specified target size. + :param region: A tuple of the form (x1, y1, x2, y2) denoting the upper left and the lower right points + of the rectangular region. Expected to have x2 > x1 and y2 > y1. + :param width: The width of the image the crop region is from. + :param height: The height of the image the crop region is from. + :param target_width: The desired width of the crop region. + :param target_height: The desired height of the crop region. 
+ ''' + x1, y1, x2, y2 = region + actual_width = x2 - x1 + actual_height = y2 - y1 + # target_width = math.ceil(actual_width / 8) * 8 + # target_height = math.ceil(actual_height / 8) * 8 + + # Try to expand region to the right of half the difference + width_diff = target_width - actual_width + x2 = min(x2 + width_diff // 2, width) + # Expand region to the left of the difference including the pixels that could not be expanded to the right + width_diff = target_width - (x2 - x1) + x1 = max(x1 - width_diff, 0) + # Try the right again + width_diff = target_width - (x2 - x1) + x2 = min(x2 + width_diff, width) + + # Try to expand region to the bottom of half the difference + height_diff = target_height - actual_height + y2 = min(y2 + height_diff // 2, height) + # Expand region to the top of the difference including the pixels that could not be expanded to the bottom + height_diff = target_height - (y2 - y1) + y1 = max(y1 - height_diff, 0) + # Try the bottom again + height_diff = target_height - (y2 - y1) + y2 = min(y2 + height_diff, height) + + return (x1, y1, x2, y2), (target_width, target_height) + + +def resize_region(region, init_size, resize_size): + # Resize a crop so that it fits an image that was resized to the given width and height + x1, y1, x2, y2 = region + init_width, init_height = init_size + resize_width, resize_height = resize_size + x1 = math.floor(x1 * resize_width / init_width) + x2 = math.ceil(x2 * resize_width / init_width) + y1 = math.floor(y1 * resize_height / init_height) + y2 = math.ceil(y2 * resize_height / init_height) + return (x1, y1, x2, y2) + + +def pad_image(image, left_pad, right_pad, top_pad, bottom_pad, fill=False, blur=False): + ''' + Pads an image with the given number of pixels on each side and fills the padding with data from the edges. 
+ :param image: A PIL image + :param left_pad: The number of pixels to pad on the left side + :param right_pad: The number of pixels to pad on the right side + :param top_pad: The number of pixels to pad on the top side + :param bottom_pad: The number of pixels to pad on the bottom side + :param blur: Whether to blur the padded edges + :return: A PIL image with size (image.width + left_pad + right_pad, image.height + top_pad + bottom_pad) + ''' + left_edge = image.crop((0, 1, 1, image.height - 1)) + right_edge = image.crop((image.width - 1, 1, image.width, image.height - 1)) + top_edge = image.crop((1, 0, image.width - 1, 1)) + bottom_edge = image.crop((1, image.height - 1, image.width - 1, image.height)) + new_width = image.width + left_pad + right_pad + new_height = image.height + top_pad + bottom_pad + padded_image = Image.new(image.mode, (new_width, new_height)) + padded_image.paste(image, (left_pad, top_pad)) + if fill: + for i in range(left_pad): + edge = left_edge.resize( + (1, new_height - i * (top_pad + bottom_pad) // left_pad), resample=Image.Resampling.NEAREST) + padded_image.paste(edge, (i, i * top_pad // left_pad)) + for i in range(right_pad): + edge = right_edge.resize( + (1, new_height - i * (top_pad + bottom_pad) // right_pad), resample=Image.Resampling.NEAREST) + padded_image.paste(edge, (new_width - 1 - i, i * top_pad // right_pad)) + for i in range(top_pad): + edge = top_edge.resize( + (new_width - i * (left_pad + right_pad) // top_pad, 1), resample=Image.Resampling.NEAREST) + padded_image.paste(edge, (i * left_pad // top_pad, i)) + for i in range(bottom_pad): + edge = bottom_edge.resize( + (new_width - i * (left_pad + right_pad) // bottom_pad, 1), resample=Image.Resampling.NEAREST) + padded_image.paste(edge, (i * left_pad // bottom_pad, new_height - 1 - i)) + if blur and not (left_pad == right_pad == top_pad == bottom_pad == 0): + padded_image = padded_image.filter(ImageFilter.GaussianBlur(BLUR_KERNEL_SIZE)) + padded_image.paste(image, (left_pad, top_pad)) + return padded_image + + +def pad_image2(image, left_pad, right_pad, top_pad, bottom_pad, fill=False, blur=False): + ''' + Pads an image with the given number of pixels on each side and fills the padding with data from the edges. + Faster than pad_image, but only pads with edge data in straight lines. 
+ :param image: A PIL image + :param left_pad: The number of pixels to pad on the left side + :param right_pad: The number of pixels to pad on the right side + :param top_pad: The number of pixels to pad on the top side + :param bottom_pad: The number of pixels to pad on the bottom side + :param blur: Whether to blur the padded edges + :return: A PIL image with size (image.width + left_pad + right_pad, image.height + top_pad + bottom_pad) + ''' + left_edge = image.crop((0, 1, 1, image.height - 1)) + right_edge = image.crop((image.width - 1, 1, image.width, image.height - 1)) + top_edge = image.crop((1, 0, image.width - 1, 1)) + bottom_edge = image.crop((1, image.height - 1, image.width - 1, image.height)) + new_width = image.width + left_pad + right_pad + new_height = image.height + top_pad + bottom_pad + padded_image = Image.new(image.mode, (new_width, new_height)) + padded_image.paste(image, (left_pad, top_pad)) + if fill: + if left_pad > 0: + padded_image.paste(left_edge.resize((left_pad, new_height), resample=Image.Resampling.NEAREST), (0, 0)) + if right_pad > 0: + padded_image.paste(right_edge.resize((right_pad, new_height), + resample=Image.Resampling.NEAREST), (new_width - right_pad, 0)) + if top_pad > 0: + padded_image.paste(top_edge.resize((new_width, top_pad), resample=Image.Resampling.NEAREST), (0, 0)) + if bottom_pad > 0: + padded_image.paste(bottom_edge.resize((new_width, bottom_pad), + resample=Image.Resampling.NEAREST), (0, new_height - bottom_pad)) + if blur and not (left_pad == right_pad == top_pad == bottom_pad == 0): + padded_image = padded_image.filter(ImageFilter.GaussianBlur(BLUR_KERNEL_SIZE)) + padded_image.paste(image, (left_pad, top_pad)) + return padded_image + + +def pad_tensor(tensor, left_pad, right_pad, top_pad, bottom_pad, fill=False, blur=False): + ''' + Pads an image tensor with the given number of pixels on each side and fills the padding with data from the edges. + :param tensor: A tensor of shape [B, H, W, C] + :param left_pad: The number of pixels to pad on the left side + :param right_pad: The number of pixels to pad on the right side + :param top_pad: The number of pixels to pad on the top side + :param bottom_pad: The number of pixels to pad on the bottom side + :param blur: Whether to blur the padded edges + :return: A tensor of shape [B, H + top_pad + bottom_pad, W + left_pad + right_pad, C] + ''' + batch_size, channels, height, width = tensor.shape + h_pad = left_pad + right_pad + v_pad = top_pad + bottom_pad + new_width = width + h_pad + new_height = height + v_pad + + # Create empty image + padded = torch.zeros((batch_size, channels, new_height, new_width), dtype=tensor.dtype) + + # Copy the original image into the centor of the padded tensor + padded[:, :, top_pad:top_pad + height, left_pad:left_pad + width] = tensor + + # Duplicate the edges of the original image into the padding + if top_pad > 0: + padded[:, :, :top_pad, :] = padded[:, :, top_pad:top_pad + 1, :] # Top edge + if bottom_pad > 0: + padded[:, :, -bottom_pad:, :] = padded[:, :, -bottom_pad - 1:-bottom_pad, :] # Bottom edge + if left_pad > 0: + padded[:, :, :, :left_pad] = padded[:, :, :, left_pad:left_pad + 1] # Left edge + if right_pad > 0: + padded[:, :, :, -right_pad:] = padded[:, :, :, -right_pad - 1:-right_pad] # Right edge + + return padded + + +def resize_and_pad_image(image, width, height, fill=False, blur=False): + ''' + Resizes an image to the given width and height and pads it to the given width and height. 
+ :param image: A PIL image + :param width: The width of the resized image + :param height: The height of the resized image + :param fill: Whether to fill the padding with data from the edges + :param blur: Whether to blur the padded edges + :return: A PIL image of size (width, height) + ''' + width_ratio = width / image.width + height_ratio = height / image.height + if height_ratio > width_ratio: + resize_ratio = width_ratio + else: + resize_ratio = height_ratio + resize_width = round(image.width * resize_ratio) + resize_height = round(image.height * resize_ratio) + resized = image.resize((resize_width, resize_height), resample=Image.Resampling.LANCZOS) + # Pad the sides of the image to get the image to the desired size that wasn't covered by the resize + horizontal_pad = (width - resize_width) // 2 + vertical_pad = (height - resize_height) // 2 + result = pad_image2(resized, horizontal_pad, horizontal_pad, vertical_pad, vertical_pad, fill, blur) + result = result.resize((width, height), resample=Image.Resampling.LANCZOS) + return result, (horizontal_pad, vertical_pad) + + +def resize_and_pad_tensor(tensor, width, height, fill=False, blur=False): + ''' + Resizes an image tensor to the given width and height and pads it to the given width and height. + :param tensor: A tensor of shape [B, H, W, C] + :param width: The width of the resized image + :param height: The height of the resized image + :param fill: Whether to fill the padding with data from the edges + :param blur: Whether to blur the padded edges + :return: A tensor of shape [B, height, width, C] + ''' + # Resize the image to the closest size that maintains the aspect ratio + width_ratio = width / tensor.shape[3] + height_ratio = height / tensor.shape[2] + if height_ratio > width_ratio: + resize_ratio = width_ratio + else: + resize_ratio = height_ratio + resize_width = round(tensor.shape[3] * resize_ratio) + resize_height = round(tensor.shape[2] * resize_ratio) + resized = F.interpolate(tensor, size=(resize_height, resize_width), mode='nearest-exact') + # Pad the sides of the image to get the image to the desired size that wasn't covered by the resize + horizontal_pad = (width - resize_width) // 2 + vertical_pad = (height - resize_height) // 2 + result = pad_tensor(resized, horizontal_pad, horizontal_pad, vertical_pad, vertical_pad, fill, blur) + result = F.interpolate(result, size=(height, width), mode='nearest-exact') + return result + + +def crop_controlnet(cond_dict, region, init_size, canvas_size, tile_size, w_pad, h_pad): + if "control" not in cond_dict: + return + c = cond_dict["control"] + controlnet = c.copy() + cond_dict["control"] = controlnet + while c is not None: + # hint is shape (B, C, H, W) + hint = controlnet.cond_hint_original + resized_crop = resize_region(region, canvas_size, hint.shape[:-3:-1]) + hint = crop_tensor(hint.movedim(1, -1), resized_crop).movedim(-1, 1) + hint = resize_tensor(hint, tile_size[::-1]) + controlnet.cond_hint_original = hint + c = c.previous_controlnet + controlnet.set_previous_controlnet(c.copy() if c is not None else None) + controlnet = controlnet.previous_controlnet + + +def region_intersection(region1, region2): + """ + Returns the coordinates of the intersection of two rectangular regions. + :param region1: A tuple of the form (x1, y1, x2, y2) denoting the upper left and the lower right points + of the first rectangular region. Expected to have x2 > x1 and y2 > y1. + :param region2: The second rectangular region with the same format as the first. 
+ :return: A tuple of the form (x1, y1, x2, y2) denoting the rectangular intersection. + None if there is no intersection. + """ + x1, y1, x2, y2 = region1 + x1_, y1_, x2_, y2_ = region2 + x1 = max(x1, x1_) + y1 = max(y1, y1_) + x2 = min(x2, x2_) + y2 = min(y2, y2_) + if x1 >= x2 or y1 >= y2: + return None + return (x1, y1, x2, y2) + + +def crop_gligen(cond_dict, region, init_size, canvas_size, tile_size, w_pad, h_pad): + if "gligen" not in cond_dict: + return + type, model, cond = cond_dict["gligen"] + if type != "position": + from warnings import warn + warn(f"Unknown gligen type {type}") + return + cropped = [] + for c in cond: + emb, h, w, y, x = c + # Get the coordinates of the box in the upscaled image + x1 = x * 8 + y1 = y * 8 + x2 = x1 + w * 8 + y2 = y1 + h * 8 + gligen_upscaled_box = resize_region((x1, y1, x2, y2), init_size, canvas_size) + + # Calculate the intersection of the gligen box and the region + intersection = region_intersection(gligen_upscaled_box, region) + if intersection is None: + continue + x1, y1, x2, y2 = intersection + + # Offset the gligen box so that the origin is at the top left of the tile region + x1 -= region[0] + y1 -= region[1] + x2 -= region[0] + y2 -= region[1] + + # Add the padding + x1 += w_pad + y1 += h_pad + x2 += w_pad + y2 += h_pad + + # Set the new position params + h = (y2 - y1) // 8 + w = (x2 - x1) // 8 + x = x1 // 8 + y = y1 // 8 + cropped.append((emb, h, w, y, x)) + + cond_dict["gligen"] = (type, model, cropped) + + +def crop_area(cond_dict, region, init_size, canvas_size, tile_size, w_pad, h_pad): + if "area" not in cond_dict: + return + + # Resize the area conditioning to the canvas size and confine it to the tile region + h, w, y, x = cond_dict["area"] + w, h, x, y = 8 * w, 8 * h, 8 * x, 8 * y + x1, y1, x2, y2 = resize_region((x, y, x + w, y + h), init_size, canvas_size) + intersection = region_intersection((x1, y1, x2, y2), region) + if intersection is None: + del cond_dict["area"] + del cond_dict["strength"] + return + x1, y1, x2, y2 = intersection + + # Offset origin to the top left of the tile + x1 -= region[0] + y1 -= region[1] + x2 -= region[0] + y2 -= region[1] + + # Add the padding + x1 += w_pad + y1 += h_pad + x2 += w_pad + y2 += h_pad + + # Set the params for tile + w, h = (x2 - x1) // 8, (y2 - y1) // 8 + x, y = x1 // 8, y1 // 8 + + cond_dict["area"] = (h, w, y, x) + + +def crop_mask(cond_dict, region, init_size, canvas_size, tile_size, w_pad, h_pad): + if "mask" not in cond_dict: + return + mask_tensor = cond_dict["mask"] # (B, H, W) + masks = [] + for i in range(mask_tensor.shape[0]): + # Convert to PIL image + mask = tensor_to_pil(mask_tensor, i) # W x H + + # Resize the mask to the canvas size + mask = mask.resize(canvas_size, Image.Resampling.BICUBIC) + + # Crop the mask to the region + mask = mask.crop(region) + + # Add padding + mask, _ = resize_and_pad_image(mask, tile_size[0], tile_size[1], fill=True) + + # Resize the mask to the tile size + if tile_size != mask.size: + mask = mask.resize(tile_size, Image.Resampling.BICUBIC) + + # Convert back to tensor + mask = pil_to_tensor(mask) # (1, H, W, 1) + mask = mask.squeeze(-1) # (1, H, W) + masks.append(mask) + + cond_dict["mask"] = torch.cat(masks, dim=0) # (B, H, W) + + +def crop_cond(cond, region, init_size, canvas_size, tile_size, w_pad=0, h_pad=0): + cropped = [] + for emb, x in cond: + cond_dict = x.copy() + n = [emb, cond_dict] + crop_controlnet(cond_dict, region, init_size, canvas_size, tile_size, w_pad, h_pad) + crop_gligen(cond_dict, region, init_size, 
canvas_size, tile_size, w_pad, h_pad) + crop_area(cond_dict, region, init_size, canvas_size, tile_size, w_pad, h_pad) + crop_mask(cond_dict, region, init_size, canvas_size, tile_size, w_pad, h_pad) + cropped.append(n) + return cropped diff --git a/custom_nodes/ComfyUI_essentials/LICENSE b/custom_nodes/ComfyUI_essentials/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..948b5e4192d70b665b15bb5a917bd98b3771eb4b --- /dev/null +++ b/custom_nodes/ComfyUI_essentials/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2023 Matteo Spinelli + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/custom_nodes/ComfyUI_essentials/README.md b/custom_nodes/ComfyUI_essentials/README.md new file mode 100644 index 0000000000000000000000000000000000000000..dd9ce93fead9bbae6b029084d1847f9873aabbc6 --- /dev/null +++ b/custom_nodes/ComfyUI_essentials/README.md @@ -0,0 +1,30 @@ +# :wrench: ComfyUI Essentials + +Essential nodes that are weirdly missing from ComfyUI core. With few exceptions they are new features and not commodities. I hope this will be just a temporary repository until the nodes get included into ComfyUI. + +## Node list + +- Get Image Size +- Image Resize (adds "keep proportions" to scale image) +- Image Crop (includes auto crop to all sides and center) +- Image Flip +- Image Desaturate +- Image Posterize +- Image Contrast Adaptive Sharpening +- Image Enhance Difference +- Image Expand Batch, expands an image batch to a given size repeating the images uniformly +- Mask Blur +- Mask Flip +- ~~Mask Grow / Shrink (same as Mask grow but adds shrink)~~ (this was recently added in the official repo) +- Mask Preview +- Mask Batch, same as Image batch but for masks +- Mask Expand Batch, expands a mask batch to a given size repeating the masks uniformly +- Mask From Color +- Transition Mask, creates a transition with series of masks, useful for animations +- Simple Math +- Console Debug (outputs any input to console) +- Model Compile, will hurt your feelings. It basically compiles the model with torch.compile. It takes a few minutes to compile but generation is faster. 
Only works on Linux and Mac (maybe WLS I dunno) +- TODO: Mask Save +- TODO: documentation + +Let me know if anything's missing diff --git a/custom_nodes/ComfyUI_essentials/__init__.py b/custom_nodes/ComfyUI_essentials/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..cf2906eb1d57932adbb306f72a65cc8b5798889e --- /dev/null +++ b/custom_nodes/ComfyUI_essentials/__init__.py @@ -0,0 +1,3 @@ +from .essentials import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS + +__all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS'] diff --git a/custom_nodes/ComfyUI_essentials/carve.py b/custom_nodes/ComfyUI_essentials/carve.py new file mode 100644 index 0000000000000000000000000000000000000000..017e804d84324fde3d4004a3ed832676b8237985 --- /dev/null +++ b/custom_nodes/ComfyUI_essentials/carve.py @@ -0,0 +1,454 @@ +# MIT licensed code from https://github.com/li-plus/seam-carving/ + +from enum import Enum +from typing import Optional, Tuple + +import numba as nb +import numpy as np +from scipy.ndimage import sobel + +DROP_MASK_ENERGY = 1e5 +KEEP_MASK_ENERGY = 1e3 + + +class OrderMode(str, Enum): + WIDTH_FIRST = "width-first" + HEIGHT_FIRST = "height-first" + + +class EnergyMode(str, Enum): + FORWARD = "forward" + BACKWARD = "backward" + + +def _list_enum(enum_class) -> Tuple: + return tuple(x.value for x in enum_class) + + +def _rgb2gray(rgb: np.ndarray) -> np.ndarray: + """Convert an RGB image to a grayscale image""" + coeffs = np.array([0.2125, 0.7154, 0.0721], dtype=np.float32) + return (rgb @ coeffs).astype(rgb.dtype) + + +def _get_seam_mask(src: np.ndarray, seam: np.ndarray) -> np.ndarray: + """Convert a list of seam column indices to a mask""" + return np.eye(src.shape[1], dtype=bool)[seam] + + +def _remove_seam_mask(src: np.ndarray, seam_mask: np.ndarray) -> np.ndarray: + """Remove a seam from the source image according to the given seam_mask""" + if src.ndim == 3: + h, w, c = src.shape + seam_mask = np.broadcast_to(seam_mask[:, :, None], src.shape) + dst = src[~seam_mask].reshape((h, w - 1, c)) + else: + h, w = src.shape + dst = src[~seam_mask].reshape((h, w - 1)) + return dst + + +def _get_energy(gray: np.ndarray) -> np.ndarray: + """Get backward energy map from the source image""" + assert gray.ndim == 2 + + gray = gray.astype(np.float32) + grad_x = sobel(gray, axis=1) + grad_y = sobel(gray, axis=0) + energy = np.abs(grad_x) + np.abs(grad_y) + return energy + + +@nb.njit(nb.int32[:](nb.float32[:, :]), cache=True) +def _get_backward_seam(energy: np.ndarray) -> np.ndarray: + """Compute the minimum vertical seam from the backward energy map""" + h, w = energy.shape + inf = np.array([np.inf], dtype=np.float32) + cost = np.concatenate((inf, energy[0], inf)) + parent = np.empty((h, w), dtype=np.int32) + base_idx = np.arange(-1, w - 1, dtype=np.int32) + + for r in range(1, h): + choices = np.vstack((cost[:-2], cost[1:-1], cost[2:])) + min_idx = np.argmin(choices, axis=0) + base_idx + parent[r] = min_idx + cost[1:-1] = cost[1:-1][min_idx] + energy[r] + + c = np.argmin(cost[1:-1]) + seam = np.empty(h, dtype=np.int32) + for r in range(h - 1, -1, -1): + seam[r] = c + c = parent[r, c] + + return seam + + +def _get_backward_seams( + gray: np.ndarray, num_seams: int, aux_energy: Optional[np.ndarray] +) -> np.ndarray: + """Compute the minimum N vertical seams using backward energy""" + h, w = gray.shape + seams = np.zeros((h, w), dtype=bool) + rows = np.arange(h, dtype=np.int32) + idx_map = np.broadcast_to(np.arange(w, dtype=np.int32), (h, w)) + energy = _get_energy(gray) + 
if aux_energy is not None: + energy += aux_energy + for _ in range(num_seams): + seam = _get_backward_seam(energy) + seams[rows, idx_map[rows, seam]] = True + + seam_mask = _get_seam_mask(gray, seam) + gray = _remove_seam_mask(gray, seam_mask) + idx_map = _remove_seam_mask(idx_map, seam_mask) + if aux_energy is not None: + aux_energy = _remove_seam_mask(aux_energy, seam_mask) + + # Only need to re-compute the energy in the bounding box of the seam + _, cur_w = energy.shape + lo = max(0, np.min(seam) - 1) + hi = min(cur_w, np.max(seam) + 1) + pad_lo = 1 if lo > 0 else 0 + pad_hi = 1 if hi < cur_w - 1 else 0 + mid_block = gray[:, lo - pad_lo : hi + pad_hi] + _, mid_w = mid_block.shape + mid_energy = _get_energy(mid_block)[:, pad_lo : mid_w - pad_hi] + if aux_energy is not None: + mid_energy += aux_energy[:, lo:hi] + energy = np.hstack((energy[:, :lo], mid_energy, energy[:, hi + 1 :])) + + return seams + + +@nb.njit( + [ + nb.int32[:](nb.float32[:, :], nb.none), + nb.int32[:](nb.float32[:, :], nb.float32[:, :]), + ], + cache=True, +) +def _get_forward_seam(gray: np.ndarray, aux_energy: Optional[np.ndarray]) -> np.ndarray: + """Compute the minimum vertical seam using forward energy""" + h, w = gray.shape + + gray = np.hstack((gray[:, :1], gray, gray[:, -1:])) + + inf = np.array([np.inf], dtype=np.float32) + dp = np.concatenate((inf, np.abs(gray[0, 2:] - gray[0, :-2]), inf)) + + parent = np.empty((h, w), dtype=np.int32) + base_idx = np.arange(-1, w - 1, dtype=np.int32) + + inf = np.array([np.inf], dtype=np.float32) + for r in range(1, h): + curr_shl = gray[r, 2:] + curr_shr = gray[r, :-2] + cost_mid = np.abs(curr_shl - curr_shr) + if aux_energy is not None: + cost_mid += aux_energy[r] + + prev_mid = gray[r - 1, 1:-1] + cost_left = cost_mid + np.abs(prev_mid - curr_shr) + cost_right = cost_mid + np.abs(prev_mid - curr_shl) + + dp_mid = dp[1:-1] + dp_left = dp[:-2] + dp_right = dp[2:] + + choices = np.vstack( + (cost_left + dp_left, cost_mid + dp_mid, cost_right + dp_right) + ) + min_idx = np.argmin(choices, axis=0) + parent[r] = min_idx + base_idx + # numba does not support specifying axis in np.min, below loop is equivalent to: + # `dp_mid[:] = np.min(choices, axis=0)` or `dp_mid[:] = choices[min_idx, np.arange(w)]` + for j, i in enumerate(min_idx): + dp_mid[j] = choices[i, j] + + c = np.argmin(dp[1:-1]) + seam = np.empty(h, dtype=np.int32) + for r in range(h - 1, -1, -1): + seam[r] = c + c = parent[r, c] + + return seam + + +def _get_forward_seams( + gray: np.ndarray, num_seams: int, aux_energy: Optional[np.ndarray] +) -> np.ndarray: + """Compute minimum N vertical seams using forward energy""" + h, w = gray.shape + seams = np.zeros((h, w), dtype=bool) + rows = np.arange(h, dtype=np.int32) + idx_map = np.broadcast_to(np.arange(w, dtype=np.int32), (h, w)) + for _ in range(num_seams): + seam = _get_forward_seam(gray, aux_energy) + seams[rows, idx_map[rows, seam]] = True + seam_mask = _get_seam_mask(gray, seam) + gray = _remove_seam_mask(gray, seam_mask) + idx_map = _remove_seam_mask(idx_map, seam_mask) + if aux_energy is not None: + aux_energy = _remove_seam_mask(aux_energy, seam_mask) + + return seams + + +def _get_seams( + gray: np.ndarray, num_seams: int, energy_mode: str, aux_energy: Optional[np.ndarray] +) -> np.ndarray: + """Get the minimum N seams from the grayscale image""" + gray = np.asarray(gray, dtype=np.float32) + if energy_mode == EnergyMode.BACKWARD: + return _get_backward_seams(gray, num_seams, aux_energy) + elif energy_mode == EnergyMode.FORWARD: + return 
_get_forward_seams(gray, num_seams, aux_energy) + else: + raise ValueError( + f"expect energy_mode to be one of {_list_enum(EnergyMode)}, got {energy_mode}" + ) + + +def _reduce_width( + src: np.ndarray, + delta_width: int, + energy_mode: str, + aux_energy: Optional[np.ndarray], +) -> Tuple[np.ndarray, Optional[np.ndarray]]: + """Reduce the width of image by delta_width pixels""" + assert src.ndim in (2, 3) and delta_width >= 0 + if src.ndim == 2: + gray = src + src_h, src_w = src.shape + dst_shape: Tuple[int, ...] = (src_h, src_w - delta_width) + else: + gray = _rgb2gray(src) + src_h, src_w, src_c = src.shape + dst_shape = (src_h, src_w - delta_width, src_c) + + to_keep = ~_get_seams(gray, delta_width, energy_mode, aux_energy) + dst = src[to_keep].reshape(dst_shape) + if aux_energy is not None: + aux_energy = aux_energy[to_keep].reshape(dst_shape[:2]) + return dst, aux_energy + + +@nb.njit( + nb.float32[:, :, :](nb.float32[:, :, :], nb.boolean[:, :], nb.int32), cache=True +) +def _insert_seams_kernel( + src: np.ndarray, seams: np.ndarray, delta_width: int +) -> np.ndarray: + """The numba kernel for inserting seams""" + src_h, src_w, src_c = src.shape + dst = np.empty((src_h, src_w + delta_width, src_c), dtype=src.dtype) + for row in range(src_h): + dst_col = 0 + for src_col in range(src_w): + if seams[row, src_col]: + left = src[row, max(src_col - 1, 0)] + right = src[row, src_col] + dst[row, dst_col] = (left + right) / 2 + dst_col += 1 + dst[row, dst_col] = src[row, src_col] + dst_col += 1 + return dst + + +def _insert_seams(src: np.ndarray, seams: np.ndarray, delta_width: int) -> np.ndarray: + """Insert multiple seams into the source image""" + dst = src.astype(np.float32) + if dst.ndim == 2: + dst = dst[:, :, None] + dst = _insert_seams_kernel(dst, seams, delta_width).astype(src.dtype) + if src.ndim == 2: + dst = dst.squeeze(-1) + return dst + + +def _expand_width( + src: np.ndarray, + delta_width: int, + energy_mode: str, + aux_energy: Optional[np.ndarray], + step_ratio: float, +) -> Tuple[np.ndarray, Optional[np.ndarray]]: + """Expand the width of image by delta_width pixels""" + assert src.ndim in (2, 3) and delta_width >= 0 + if not 0 < step_ratio <= 1: + raise ValueError(f"expect `step_ratio` to be between (0,1], got {step_ratio}") + + dst = src + while delta_width > 0: + max_step_size = max(1, round(step_ratio * dst.shape[1])) + step_size = min(max_step_size, delta_width) + gray = dst if dst.ndim == 2 else _rgb2gray(dst) + seams = _get_seams(gray, step_size, energy_mode, aux_energy) + dst = _insert_seams(dst, seams, step_size) + if aux_energy is not None: + aux_energy = _insert_seams(aux_energy, seams, step_size) + delta_width -= step_size + + return dst, aux_energy + + +def _resize_width( + src: np.ndarray, + width: int, + energy_mode: str, + aux_energy: Optional[np.ndarray], + step_ratio: float, +) -> Tuple[np.ndarray, Optional[np.ndarray]]: + """Resize the width of image by removing vertical seams""" + assert src.size > 0 and src.ndim in (2, 3) + assert width > 0 + + src_w = src.shape[1] + if src_w < width: + dst, aux_energy = _expand_width( + src, width - src_w, energy_mode, aux_energy, step_ratio + ) + else: + dst, aux_energy = _reduce_width(src, src_w - width, energy_mode, aux_energy) + return dst, aux_energy + + +def _transpose_image(src: np.ndarray) -> np.ndarray: + """Transpose a source image in rgb or grayscale format""" + if src.ndim == 3: + dst = src.transpose((1, 0, 2)) + else: + dst = src.T + return dst + + +def _resize_height( + src: np.ndarray, + height: int, + 
energy_mode: str, + aux_energy: Optional[np.ndarray], + step_ratio: float, +) -> Tuple[np.ndarray, Optional[np.ndarray]]: + """Resize the height of image by removing horizontal seams""" + assert src.ndim in (2, 3) and height > 0 + if aux_energy is not None: + aux_energy = aux_energy.T + src = _transpose_image(src) + src, aux_energy = _resize_width(src, height, energy_mode, aux_energy, step_ratio) + src = _transpose_image(src) + if aux_energy is not None: + aux_energy = aux_energy.T + return src, aux_energy + + +def _check_mask(mask: np.ndarray, shape: Tuple[int, ...]) -> np.ndarray: + """Ensure the mask to be a 2D grayscale map of specific shape""" + mask = np.asarray(mask, dtype=bool) + if mask.ndim != 2: + raise ValueError(f"expect mask to be a 2d binary map, got shape {mask.shape}") + if mask.shape != shape: + raise ValueError( + f"expect the shape of mask to match the image, got {mask.shape} vs {shape}" + ) + return mask + + +def _check_src(src: np.ndarray) -> np.ndarray: + """Ensure the source to be RGB or grayscale""" + src = np.asarray(src) + if src.size == 0 or src.ndim not in (2, 3): + raise ValueError( + f"expect a 3d rgb image or a 2d grayscale image, got image in shape {src.shape}" + ) + return src + + +def seam_carving( + src: np.ndarray, + size: Optional[Tuple[int, int]] = None, + energy_mode: str = "backward", + order: str = "width-first", + keep_mask: Optional[np.ndarray] = None, + drop_mask: Optional[np.ndarray] = None, + step_ratio: float = 0.5, +) -> np.ndarray: + """Resize the image using the content-aware seam-carving algorithm. + + :param src: A source image in RGB or grayscale format. + :param size: The target size in pixels, as a 2-tuple (width, height). + :param energy_mode: Policy to compute energy for the source image. Could be + one of ``backward`` or ``forward``. If ``backward``, compute the energy + as the gradient at each pixel. If ``forward``, compute the energy as the + distances between adjacent pixels after each pixel is removed. + :param order: The order to remove horizontal and vertical seams. Could be + one of ``width-first`` or ``height-first``. In ``width-first`` mode, we + remove or insert all vertical seams first, then the horizontal ones, + while ``height-first`` is the opposite. + :param keep_mask: An optional mask where the foreground is protected from + seam removal. If not specified, no area will be protected. + :param drop_mask: An optional binary object mask to remove. If given, the + object will be removed before resizing the image to the target size. + :param step_ratio: The maximum size expansion ratio in one seam carving step. + The image will be expanded in multiple steps if target size is too large. + :return: A resized copy of the source image. 
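+
+    Example (illustrative)::
+
+        import numpy as np
+        img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # any RGB array
+        out = seam_carving(img, size=(192, 256), energy_mode="forward")
+        # 64 vertical seams removed; note the (height, width, channel) layout
+        assert out.shape == (256, 192, 3)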
+ """ + src = _check_src(src) + + if order not in _list_enum(OrderMode): + raise ValueError( + f"expect order to be one of {_list_enum(OrderMode)}, got {order}" + ) + + aux_energy = None + + if keep_mask is not None: + keep_mask = _check_mask(keep_mask, src.shape[:2]) + + aux_energy = np.zeros(src.shape[:2], dtype=np.float32) + aux_energy[keep_mask] += KEEP_MASK_ENERGY + + # remove object if `drop_mask` is given + if drop_mask is not None: + drop_mask = _check_mask(drop_mask, src.shape[:2]) + + if aux_energy is None: + aux_energy = np.zeros(src.shape[:2], dtype=np.float32) + aux_energy[drop_mask] -= DROP_MASK_ENERGY + + if order == OrderMode.HEIGHT_FIRST: + src = _transpose_image(src) + aux_energy = aux_energy.T + + num_seams = (aux_energy < 0).sum(1).max() + while num_seams > 0: + src, aux_energy = _reduce_width(src, num_seams, energy_mode, aux_energy) + num_seams = (aux_energy < 0).sum(1).max() + + if order == OrderMode.HEIGHT_FIRST: + src = _transpose_image(src) + aux_energy = aux_energy.T + + # resize image if `size` is given + if size is not None: + width, height = size + width = round(width) + height = round(height) + if width <= 0 or height <= 0: + raise ValueError(f"expect target size to be positive, got {size}") + + if order == OrderMode.WIDTH_FIRST: + src, aux_energy = _resize_width( + src, width, energy_mode, aux_energy, step_ratio + ) + src, aux_energy = _resize_height( + src, height, energy_mode, aux_energy, step_ratio + ) + else: + src, aux_energy = _resize_height( + src, height, energy_mode, aux_energy, step_ratio + ) + src, aux_energy = _resize_width( + src, width, energy_mode, aux_energy, step_ratio + ) + + return src diff --git a/custom_nodes/ComfyUI_essentials/essentials.py b/custom_nodes/ComfyUI_essentials/essentials.py new file mode 100644 index 0000000000000000000000000000000000000000..742e745b25ffa391508e1eccb2755a88de756116 --- /dev/null +++ b/custom_nodes/ComfyUI_essentials/essentials.py @@ -0,0 +1,1576 @@ +import warnings +warnings.filterwarnings('ignore', module="torchvision") +import ast +import math +import random +import os +import operator as op +import numpy as np +from PIL import Image, ImageDraw, ImageFont, ImageColor, ImageFilter +import io + +import torch +import torch.nn.functional as F +import torchvision.transforms.v2 as T + +from nodes import MAX_RESOLUTION, SaveImage, common_ksampler +import folder_paths +import comfy.utils +import comfy.samplers + +STOCHASTIC_SAMPLERS = ["euler_ancestral", "dpm_2_ancestral", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm"] + +def p(image): + return image.permute([0,3,1,2]) +def pb(image): + return image.permute([0,2,3,1]) + +# from https://github.com/pythongosssss/ComfyUI-Custom-Scripts +class AnyType(str): + def __ne__(self, __value: object) -> bool: + return False +any = AnyType("*") + +EPSILON = 1e-5 + +class GetImageSize: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + } + } + + RETURN_TYPES = ("INT", "INT") + RETURN_NAMES = ("width", "height") + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image): + return (image.shape[2], image.shape[1],) + +class ImageResize: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + "width": ("INT", { "default": 512, "min": 0, "max": MAX_RESOLUTION, "step": 8, }), + "height": ("INT", { "default": 512, "min": 0, "max": MAX_RESOLUTION, "step": 8, }), + "interpolation": (["nearest", "bilinear", 
"bicubic", "area", "nearest-exact", "lanczos"],), + "keep_proportion": ("BOOLEAN", { "default": False }), + "condition": (["always", "only if bigger", "only if smaller"],), + } + } + + RETURN_TYPES = ("IMAGE", "INT", "INT",) + RETURN_NAMES = ("IMAGE", "width", "height",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, width, height, keep_proportion, interpolation="nearest", condition="always"): + if keep_proportion is True: + _, oh, ow, _ = image.shape + + if width == 0 and oh < height: + width = MAX_RESOLUTION + elif width == 0 and oh >= height: + width = ow + + if height == 0 and ow < width: + height = MAX_RESOLUTION + elif height == 0 and ow >= width: + height = ow + + #width = ow if width == 0 else width + #height = oh if height == 0 else height + ratio = min(width / ow, height / oh) + width = round(ow*ratio) + height = round(oh*ratio) + + outputs = p(image) + + if "always" in condition or ("bigger" in condition and (oh > height or ow > width)) or ("smaller" in condition and (oh < height or ow < width)): + if interpolation == "lanczos": + outputs = comfy.utils.lanczos(outputs, width, height) + else: + outputs = F.interpolate(outputs, size=(height, width), mode=interpolation) + + outputs = pb(outputs) + + return(outputs, outputs.shape[2], outputs.shape[1],) + +class ImageFlip: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + "axis": (["x", "y", "xy"],), + } + } + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, axis): + dim = () + if "y" in axis: + dim += (1,) + if "x" in axis: + dim += (2,) + image = torch.flip(image, dim) + + return(image,) + +class ImageCrop: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + "width": ("INT", { "default": 256, "min": 0, "max": MAX_RESOLUTION, "step": 8, }), + "height": ("INT", { "default": 256, "min": 0, "max": MAX_RESOLUTION, "step": 8, }), + "position": (["top-left", "top-center", "top-right", "right-center", "bottom-right", "bottom-center", "bottom-left", "left-center", "center"],), + "x_offset": ("INT", { "default": 0, "min": -99999, "step": 1, }), + "y_offset": ("INT", { "default": 0, "min": -99999, "step": 1, }), + } + } + + RETURN_TYPES = ("IMAGE","INT","INT",) + RETURN_NAMES = ("IMAGE","x","y",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, width, height, position, x_offset, y_offset): + _, oh, ow, _ = image.shape + + width = min(ow, width) + height = min(oh, height) + + if "center" in position: + x = round((ow-width) / 2) + y = round((oh-height) / 2) + if "top" in position: + y = 0 + if "bottom" in position: + y = oh-height + if "left" in position: + x = 0 + if "right" in position: + x = ow-width + + x += x_offset + y += y_offset + + x2 = x+width + y2 = y+height + + if x2 > ow: + x2 = ow + if x < 0: + x = 0 + if y2 > oh: + y2 = oh + if y < 0: + y = 0 + + image = image[:, y:y2, x:x2, :] + + return(image, x, y, ) + +class ImageDesaturate: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + "factor": ("FLOAT", { "default": 1.00, "min": 0.00, "max": 1.00, "step": 0.05, }), + } + } + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, factor): + grayscale = 0.299 * image[..., 0] + 0.587 * image[..., 1] + 0.114 * image[..., 2] + grayscale = (1.0 - factor) * image + factor * grayscale.unsqueeze(-1).repeat(1, 1, 1, 3) + return(grayscale,) + +class ImagePosterize: + 
@classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + "threshold": ("FLOAT", { "default": 0.50, "min": 0.00, "max": 1.00, "step": 0.05, }), + } + } + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, threshold): + image = 0.299 * image[..., 0] + 0.587 * image[..., 1] + 0.114 * image[..., 2] + #image = image.mean(dim=3, keepdim=True) + image = (image > threshold).float() + image = image.unsqueeze(-1).repeat(1, 1, 1, 3) + + return(image,) + +class ImageEnhanceDifference: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image1": ("IMAGE",), + "image2": ("IMAGE",), + "exponent": ("FLOAT", { "default": 0.75, "min": 0.00, "max": 1.00, "step": 0.05, }), + } + } + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image1, image2, exponent): + if image1.shape != image2.shape: + image2 = p(image2) + image2 = comfy.utils.common_upscale(image2, image1.shape[2], image1.shape[1], upscale_method='bicubic', crop='center') + image2 = pb(image2) + + diff_image = image1 - image2 + diff_image = torch.pow(diff_image, exponent) + diff_image = torch.clamp(diff_image, 0, 1) + + return(diff_image,) + +class ImageExpandBatch: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + "size": ("INT", { "default": 16, "min": 1, "step": 1, }), + "method": (["expand", "repeat all", "repeat first", "repeat last"],) + } + } + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, size, method): + orig_size = image.shape[0] + + if orig_size == size: + return (image,) + + if size <= 1: + return (image[:size],) + + if 'expand' in method: + out = torch.empty([size] + list(image.shape)[1:], dtype=image.dtype, device=image.device) + if size < orig_size: + scale = (orig_size - 1) / (size - 1) + for i in range(size): + out[i] = image[min(round(i * scale), orig_size - 1)] + else: + scale = orig_size / size + for i in range(size): + out[i] = image[min(math.floor((i + 0.5) * scale), orig_size - 1)] + elif 'all' in method: + out = image.repeat([math.ceil(size / image.shape[0])] + [1] * (len(image.shape) - 1))[:size] + elif 'first' in method: + if size < image.shape[0]: + out = image[:size] + else: + out = torch.cat([image[:1].repeat(size-image.shape[0], 1, 1, 1), image], dim=0) + elif 'last' in method: + if size < image.shape[0]: + out = image[:size] + else: + out = torch.cat((image, image[-1:].repeat((size-image.shape[0], 1, 1, 1))), dim=0) + + return (out,) + +class ExtractKeyframes: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + "threshold": ("FLOAT", { "default": 0.85, "min": 0.00, "max": 1.00, "step": 0.01, }), + } + } + + RETURN_TYPES = ("IMAGE", "STRING") + RETURN_NAMES = ("KEYFRAMES", "indexes") + + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, threshold): + window_size = 2 + + variations = torch.sum(torch.abs(image[1:] - image[:-1]), dim=[1, 2, 3]) + #variations = torch.sum((image[1:] - image[:-1]) ** 2, dim=[1, 2, 3]) + threshold = torch.quantile(variations.float(), threshold).item() + + keyframes = [] + for i in range(image.shape[0] - window_size + 1): + window = image[i:i + window_size] + variation = torch.sum(torch.abs(window[-1] - window[0])).item() + + if variation > threshold: + keyframes.append(i + window_size - 1) + + return (image[keyframes], ','.join(map(str, keyframes)),) + +class MaskFlip: + @classmethod + 
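+    # same convention as ImageFlip: "y" flips dim 1 (height) and "x" flips dim 2 (width) of the (batch, height, width) mask tensor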
def INPUT_TYPES(s): + return { + "required": { + "mask": ("MASK",), + "axis": (["x", "y", "xy"],), + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, mask, axis): + dim = () + if "y" in axis: + dim += (1,) + if "x" in axis: + dim += (2,) + mask = torch.flip(mask, dims=dim) + + return(mask,) + +class MaskBlur: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "mask": ("MASK",), + "amount": ("FLOAT", { "default": 6.0, "min": 0, "step": 0.5, }), + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, mask, amount): + size = int(6 * amount +1) + if size % 2 == 0: + size+= 1 + + blurred = mask.unsqueeze(1) + blurred = T.GaussianBlur(size, amount)(blurred) + blurred = blurred.squeeze(1) + + return(blurred,) + +class MaskPreview(SaveImage): + def __init__(self): + self.output_dir = folder_paths.get_temp_directory() + self.type = "temp" + self.prefix_append = "_temp_" + ''.join(random.choice("abcdefghijklmnopqrstupvxyz") for x in range(5)) + self.compress_level = 4 + + @classmethod + def INPUT_TYPES(s): + return { + "required": {"mask": ("MASK",), }, + "hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"}, + } + + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, mask, filename_prefix="ComfyUI", prompt=None, extra_pnginfo=None): + preview = mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])).movedim(1, -1).expand(-1, -1, -1, 3) + return self.save_images(preview, filename_prefix, prompt, extra_pnginfo) + +class MaskBatch: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "mask1": ("MASK",), + "mask2": ("MASK",), + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, mask1, mask2): + if mask1.shape[1:] != mask2.shape[1:]: + mask2 = F.interpolate(mask2.unsqueeze(1), size=(mask1.shape[1], mask1.shape[2]), mode="bicubic").squeeze(1) + + out = torch.cat((mask1, mask2), dim=0) + return (out,) + +class MaskExpandBatch: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "mask": ("MASK",), + "size": ("INT", { "default": 16, "min": 1, "step": 1, }), + "method": (["expand", "repeat all", "repeat first", "repeat last"],) + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, mask, size, method): + orig_size = mask.shape[0] + + if orig_size == size: + return (mask,) + + if size <= 1: + return (mask[:size],) + + if 'expand' in method: + out = torch.empty([size] + list(mask.shape)[1:], dtype=mask.dtype, device=mask.device) + if size < orig_size: + scale = (orig_size - 1) / (size - 1) + for i in range(size): + out[i] = mask[min(round(i * scale), orig_size - 1)] + else: + scale = orig_size / size + for i in range(size): + out[i] = mask[min(math.floor((i + 0.5) * scale), orig_size - 1)] + elif 'all' in method: + out = mask.repeat([math.ceil(size / mask.shape[0])] + [1] * (len(mask.shape) - 1))[:size] + elif 'first' in method: + if size < mask.shape[0]: + out = mask[:size] + else: + out = torch.cat([mask[:1].repeat(size-mask.shape[0], 1, 1), mask], dim=0) + elif 'last' in method: + if size < mask.shape[0]: + out = mask[:size] + else: + out = torch.cat((mask, mask[-1:].repeat((size-mask.shape[0], 1, 1))), dim=0) + + return (out,) + +def cubic_bezier(t, p): + p0, p1, p2, p3 = p + return (1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1 + 3 * (1 - t) * t**2 * p2 + t**3 * p3 + +class MaskFromColor: + @classmethod + def INPUT_TYPES(s): + 
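+        # `threshold` is a per-channel tolerance: after quantizing to 8-bit, pixels within
+        # [color - threshold, color + threshold] on all three channels become 1.0 in the mask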
return { + "required": { + "image": ("IMAGE", ), + "red": ("INT", { "default": 255, "min": 0, "max": 255, "step": 1, }), + "green": ("INT", { "default": 255, "min": 0, "max": 255, "step": 1, }), + "blue": ("INT", { "default": 255, "min": 0, "max": 255, "step": 1, }), + "threshold": ("INT", { "default": 0, "min": 0, "max": 127, "step": 1, }), + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, red, green, blue, threshold): + temp = (torch.clamp(image, 0, 1.0) * 255.0).round().to(torch.int) + color = torch.tensor([red, green, blue]) + lower_bound = (color - threshold).clamp(min=0) + upper_bound = (color + threshold).clamp(max=255) + lower_bound = lower_bound.view(1, 1, 1, 3) + upper_bound = upper_bound.view(1, 1, 1, 3) + mask = (temp >= lower_bound) & (temp <= upper_bound) + mask = mask.all(dim=-1) + mask = mask.float() + + return (mask, ) + +class MaskFromBatch: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "mask": ("MASK", ), + "start": ("INT", { "default": 0, "min": 0, "step": 1, }), + "length": ("INT", { "default": -1, "min": -1, "step": 1, }), + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, mask, start, length): + if length<0: + length = mask.shape[0] + start = min(start, mask.shape[0]-1) + length = min(mask.shape[0]-start, length) + return (mask[start:start + length], ) + +class ImageFromBatch: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE", ), + "start": ("INT", { "default": 0, "min": 0, "step": 1, }), + "length": ("INT", { "default": -1, "min": -1, "step": 1, }), + } + } + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, start, length): + if length<0: + length = image.shape[0] + start = min(start, image.shape[0]-1) + length = min(image.shape[0]-start, length) + return (image[start:start + length], ) + +class ImageCompositeFromMaskBatch: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image_from": ("IMAGE", ), + "image_to": ("IMAGE", ), + "mask": ("MASK", ) + } + } + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image_from, image_to, mask): + frames = mask.shape[0] + + if image_from.shape[1] != image_to.shape[1] or image_from.shape[2] != image_to.shape[2]: + image_to = p(image_to) + image_to = comfy.utils.common_upscale(image_to, image_from.shape[2], image_from.shape[1], upscale_method='bicubic', crop='center') + image_to = pb(image_to) + + if frames < image_from.shape[0]: + image_from = image_from[:frames] + elif frames > image_from.shape[0]: + image_from = torch.cat((image_from, image_from[-1].unsqueeze(0).repeat(frames-image_from.shape[0], 1, 1, 1)), dim=0) + + mask = mask.unsqueeze(3).repeat(1, 1, 1, 3) + + if image_from.shape[1] != mask.shape[1] or image_from.shape[2] != mask.shape[2]: + mask = p(mask) + mask = comfy.utils.common_upscale(mask, image_from.shape[2], image_from.shape[1], upscale_method='bicubic', crop='center') + mask = pb(mask) + + out = mask * image_to + (1 - mask) * image_from + + return (out, ) + +class TransitionMask: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "width": ("INT", { "default": 512, "min": 1, "max": MAX_RESOLUTION, "step": 1, }), + "height": ("INT", { "default": 512, "min": 1, "max": MAX_RESOLUTION, "step": 1, }), + "frames": ("INT", { "default": 16, "min": 1, "max": 9999, "step": 1, }), + "start_frame": ("INT", { 
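+                                # frames before start_frame are held all-black (0.0) and frames from end_frame on all-white (1.0); only the span in between animates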
"default": 0, "min": 0, "step": 1, }), + "end_frame": ("INT", { "default": 9999, "min": 0, "step": 1, }), + "transition_type": (["horizontal slide", "vertical slide", "horizontal bar", "vertical bar", "center box", "horizontal door", "vertical door", "circle", "fade"],), + "timing_function": (["linear", "in", "out", "in-out"],) + } + } + + RETURN_TYPES = ("MASK",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, width, height, frames, start_frame, end_frame, transition_type, timing_function): + if timing_function == 'in': + tf = [0.0, 0.0, 0.5, 1.0] + elif timing_function == 'out': + tf = [0.0, 0.5, 1.0, 1.0] + elif timing_function == 'in-out': + tf = [0, 1, 0, 1] + #elif timing_function == 'back': + # tf = [0, 1.334, 1.334, 0] + else: + tf = [0, 0, 1, 1] + + out = [] + + end_frame = min(frames, end_frame) + transition = end_frame - start_frame + + if start_frame > 0: + out = out + [torch.full((height, width), 0.0, dtype=torch.float32, device="cpu")] * start_frame + + for i in range(transition): + frame = torch.full((height, width), 0.0, dtype=torch.float32, device="cpu") + progress = i/(transition-1) + + if timing_function != 'linear': + progress = cubic_bezier(progress, tf) + + if "horizontal slide" in transition_type: + pos = round(width*progress) + frame[:, :pos] = 1.0 + elif "vertical slide" in transition_type: + pos = round(height*progress) + frame[:pos, :] = 1.0 + elif "box" in transition_type: + box_w = round(width*progress) + box_h = round(height*progress) + x1 = (width - box_w) // 2 + y1 = (height - box_h) // 2 + x2 = x1 + box_w + y2 = y1 + box_h + frame[y1:y2, x1:x2] = 1.0 + elif "circle" in transition_type: + radius = math.ceil(math.sqrt(pow(width,2)+pow(height,2))*progress/2) + c_x = width // 2 + c_y = height // 2 + # is this real life? Am I hallucinating? + x = torch.arange(0, width, dtype=torch.float32, device="cpu") + y = torch.arange(0, height, dtype=torch.float32, device="cpu") + y, x = torch.meshgrid((y, x), indexing="ij") + circle = ((x - c_x) ** 2 + (y - c_y) ** 2) <= (radius ** 2) + frame[circle] = 1.0 + elif "horizontal bar" in transition_type: + bar = round(height*progress) + y1 = (height - bar) // 2 + y2 = y1 + bar + frame[y1:y2, :] = 1.0 + elif "vertical bar" in transition_type: + bar = round(width*progress) + x1 = (width - bar) // 2 + x2 = x1 + bar + frame[:, x1:x2] = 1.0 + elif "horizontal door" in transition_type: + bar = math.ceil(height*progress/2) + if bar > 0: + frame[:bar, :] = 1.0 + frame[-bar:, :] = 1.0 + elif "vertical door" in transition_type: + bar = math.ceil(width*progress/2) + if bar > 0: + frame[:, :bar] = 1.0 + frame[:, -bar:] = 1.0 + elif "fade" in transition_type: + frame[:,:] = progress + + out.append(frame) + + if end_frame < frames: + out = out + [torch.full((height, width), 1.0, dtype=torch.float32, device="cpu")] * (frames - end_frame) + + out = torch.stack(out, dim=0) + + return (out, ) + +def min_(tensor_list): + # return the element-wise min of the tensor list. + x = torch.stack(tensor_list) + mn = x.min(axis=0)[0] + return torch.clamp(mn, min=0) + +def max_(tensor_list): + # return the element-wise max of the tensor list. 
+ x = torch.stack(tensor_list) + mx = x.max(axis=0)[0] + return torch.clamp(mx, max=1) + +# From https://github.com/Jamy-L/Pytorch-Contrast-Adaptive-Sharpening/ +class ImageCAS: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "image": ("IMAGE",), + "amount": ("FLOAT", {"default": 0.8, "min": 0, "max": 1, "step": 0.05}), + }, + } + + RETURN_TYPES = ("IMAGE",) + CATEGORY = "essentials" + FUNCTION = "execute" + + def execute(self, image, amount): + img = F.pad(p(image), pad=(1, 1, 1, 1)).cpu() + + a = img[..., :-2, :-2] + b = img[..., :-2, 1:-1] + c = img[..., :-2, 2:] + d = img[..., 1:-1, :-2] + e = img[..., 1:-1, 1:-1] + f = img[..., 1:-1, 2:] + g = img[..., 2:, :-2] + h = img[..., 2:, 1:-1] + i = img[..., 2:, 2:] + + # Computing contrast + cross = (b, d, e, f, h) + mn = min_(cross) + mx = max_(cross) + + diag = (a, c, g, i) + mn2 = min_(diag) + mx2 = max_(diag) + mx = mx + mx2 + mn = mn + mn2 + + # Computing local weight + inv_mx = torch.reciprocal(mx + EPSILON) + amp = inv_mx * torch.minimum(mn, (2 - mx)) + + # scaling + amp = torch.sqrt(amp) + w = - amp * (amount * (1/5 - 1/8) + 1/8) + div = torch.reciprocal(1 + 4*w) + + output = ((b + d + f + h)*w + e) * div + output = output.clamp(0, 1) + #output = torch.nan_to_num(output) # this seems the only way to ensure there are no NaNs + + output = pb(output) + + return (output,) + +operators = { + ast.Add: op.add, + ast.Sub: op.sub, + ast.Mult: op.mul, + ast.Div: op.truediv, + ast.FloorDiv: op.floordiv, + ast.Pow: op.pow, + ast.BitXor: op.xor, + ast.USub: op.neg, + ast.Mod: op.mod, +} + +op_functions = { + 'min': min, + 'max': max +} + +class SimpleMath: + def __init__(self): + pass + + @classmethod + def INPUT_TYPES(s): + return { + "optional": { + "a": ("INT,FLOAT", { "default": 0.0, "step": 0.1 }), + "b": ("INT,FLOAT", { "default": 0.0, "step": 0.1 }), + }, + "required": { + "value": ("STRING", { "multiline": False, "default": "" }), + }, + } + + RETURN_TYPES = ("INT", "FLOAT", ) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, value, a = 0.0, b = 0.0): + def eval_(node): + if isinstance(node, ast.Num): # number + return node.n + elif isinstance(node, ast.Name): # variable + if node.id == "a": + return a + if node.id == "b": + return b + elif isinstance(node, ast.BinOp): # + return operators[type(node.op)](eval_(node.left), eval_(node.right)) + elif isinstance(node, ast.UnaryOp): # e.g., -1 + return operators[type(node.op)](eval_(node.operand)) + elif isinstance(node, ast.Call): # custom function + if node.func.id in op_functions: + args =[eval_(arg) for arg in node.args] + return op_functions[node.func.id](*args) + else: + return 0 + + result = eval_(ast.parse(value, mode='eval').body) + + if math.isnan(result): + result = 0.0 + + return (round(result), result, ) + +class ModelCompile(): + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "model": ("MODEL",), + "fullgraph": ("BOOLEAN", { "default": False }), + "dynamic": ("BOOLEAN", { "default": False }), + "mode": (["default", "reduce-overhead", "max-autotune", "max-autotune-no-cudagraphs"],), + }, + } + + RETURN_TYPES = ("MODEL", ) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, model, fullgraph, dynamic, mode): + work_model = model.clone() + torch._dynamo.config.suppress_errors = True + work_model.model.diffusion_model = torch.compile(work_model.model.diffusion_model, dynamic=dynamic, fullgraph=fullgraph, mode=mode) + return( work_model, ) + +class ConsoleDebug: + def __init__(self): + pass + + @classmethod + 
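+    # `any` is the AnyType("*") wildcard defined at the top of the file: its __ne__ always
+    # returns False, so ComfyUI's socket type validation accepts a connection of any type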
def INPUT_TYPES(s): + return { + "required": { + "value": (any, {}), + }, + "optional": { + "prefix": ("STRING", { "multiline": False, "default": "Value:" }) + } + } + + RETURN_TYPES = () + FUNCTION = "execute" + CATEGORY = "essentials" + OUTPUT_NODE = True + + def execute(self, value, prefix): + print(f"\033[96m{prefix} {value}\033[0m") + + return (None,) + +class DebugTensorShape: + def __init__(self): + pass + + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "tensor": (any, {}), + }, + } + + RETURN_TYPES = () + FUNCTION = "execute" + CATEGORY = "essentials" + OUTPUT_NODE = True + + def execute(self, tensor): + shapes = [] + def tensorShape(tensor): + if isinstance(tensor, dict): + for k in tensor: + tensorShape(tensor[k]) + elif isinstance(tensor, list): + for i in range(len(tensor)): + tensorShape(tensor[i]) + elif hasattr(tensor, 'shape'): + shapes.append(list(tensor.shape)) + + tensorShape(tensor) + + print(f"\033[96mShapes found: {shapes}\033[0m") + + return (None,) + +class BatchCount: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "batch": (any, {}), + }, + } + + RETURN_TYPES = ("INT",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, batch): + count = 0 + if hasattr(batch, 'shape'): + count = batch.shape[0] + elif isinstance(batch, dict) and 'samples' in batch: + count = batch['samples'].shape[0] + elif isinstance(batch, list) or isinstance(batch, dict): + count = len(batch) + + return (count, ) + +class ImageSeamCarving: + @classmethod + def INPUT_TYPES(cls): + return { + "required": { + "image": ("IMAGE",), + "width": ("INT", { "default": 512, "min": 1, "max": MAX_RESOLUTION, "step": 1, }), + "height": ("INT", { "default": 512, "min": 1, "max": MAX_RESOLUTION, "step": 1, }), + "energy": (["backward", "forward"],), + "order": (["width-first", "height-first"],), + }, + "optional": { + "keep_mask": ("MASK",), + "drop_mask": ("MASK",), + } + } + + RETURN_TYPES = ("IMAGE",) + CATEGORY = "essentials" + FUNCTION = "execute" + + def execute(self, image, width, height, energy, order, keep_mask=None, drop_mask=None): + try: + from .carve import seam_carving + except ImportError as e: + raise Exception(e) + + img = p(image) + + if keep_mask is not None: + #keep_mask = keep_mask.reshape((-1, 1, keep_mask.shape[-2], keep_mask.shape[-1])).movedim(1, -1) + keep_mask = p(keep_mask.unsqueeze(-1)) + + if keep_mask.shape[2] != img.shape[2] or keep_mask.shape[3] != img.shape[3]: + keep_mask = F.interpolate(keep_mask, size=(img.shape[2], img.shape[3]), mode="bilinear") + if drop_mask is not None: + drop_mask = p(drop_mask.unsqueeze(-1)) + + if drop_mask.shape[2] != img.shape[2] or drop_mask.shape[3] != img.shape[3]: + drop_mask = F.interpolate(drop_mask, size=(img.shape[2], img.shape[3]), mode="bilinear") + + out = [] + for i in range(img.shape[0]): + resized = seam_carving( + T.ToPILImage()(img[i]), + size=(width, height), + energy_mode=energy, + order=order, + keep_mask=T.ToPILImage()(keep_mask[i]) if keep_mask is not None else None, + drop_mask=T.ToPILImage()(drop_mask[i]) if drop_mask is not None else None, + ) + out.append(T.ToTensor()(resized)) + + out = torch.stack(out) + out = pb(out) + + return(out, ) + +class CLIPTextEncodeSDXLSimplified: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "width": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "height": ("INT", {"default": 1024.0, "min": 0, "max": MAX_RESOLUTION}), + "text": ("STRING", {"multiline": True, "default": ""}), + "clip": ("CLIP", ), + }} + 
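+    # execute() multiplies width/height by 4 before building the size conditioning,
+    # presumably to bias SDXL's micro-conditioning toward higher-detail outputs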
RETURN_TYPES = ("CONDITIONING",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, clip, width, height, text): + crop_w = 0 + crop_h = 0 + width = width*4 + height = height*4 + target_width = width + target_height = height + text_g = text_l = text + + tokens = clip.tokenize(text_g) + tokens["l"] = clip.tokenize(text_l)["l"] + if len(tokens["l"]) != len(tokens["g"]): + empty = clip.tokenize("") + while len(tokens["l"]) < len(tokens["g"]): + tokens["l"] += empty["l"] + while len(tokens["l"]) > len(tokens["g"]): + tokens["g"] += empty["g"] + cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True) + return ([[cond, {"pooled_output": pooled, "width": width, "height": height, "crop_w": crop_w, "crop_h": crop_h, "target_width": target_width, "target_height": target_height}]], ) + +class KSamplerVariationsStochastic: + @classmethod + def INPUT_TYPES(s): + return {"required":{ + "model": ("MODEL",), + "latent_image": ("LATENT", ), + "noise_seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 25, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 7.0, "min": 0.0, "max": 100.0, "step":0.1, "round": 0.01}), + "sampler": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "positive": ("CONDITIONING", ), + "negative": ("CONDITIONING", ), + "variation_seed": ("INT:seed", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "variation_strength": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0, "step":0.05, "round": 0.01}), + #"variation_sampler": (comfy.samplers.KSampler.SAMPLERS, ), + "cfg_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step":0.05, "round": 0.01}), + }} + + RETURN_TYPES = ("LATENT", ) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, model, latent_image, noise_seed, steps, cfg, sampler, scheduler, positive, negative, variation_seed, variation_strength, cfg_scale, variation_sampler="dpmpp_2m_sde"): + # Stage 1: composition sampler + force_full_denoise = False # return with leftover noise = "enable" + disable_noise = False # add noise = "enable" + + end_at_step = max(int(steps * (1-variation_strength)), 1) + start_at_step = 0 + + work_latent = latent_image.copy() + batch_size = work_latent["samples"].shape[0] + work_latent["samples"] = work_latent["samples"][0].unsqueeze(0) + + stage1 = common_ksampler(model, noise_seed, steps, cfg, sampler, scheduler, positive, negative, work_latent, denoise=1.0, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)[0] + print(stage1) + if batch_size > 1: + stage1["samples"] = stage1["samples"].clone().repeat(batch_size, 1, 1, 1) + + # Stage 2: variation sampler + force_full_denoise = True + disable_noise = True + cfg = max(cfg * cfg_scale, 1.0) + start_at_step = end_at_step + end_at_step = steps + + return common_ksampler(model, variation_seed, steps, cfg, variation_sampler, scheduler, positive, negative, stage1, denoise=1.0, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise) + +# From https://github.com/BlenderNeko/ComfyUI_Noise/ +def slerp(val, low, high): + dims = low.shape + + low = low.reshape(dims[0], -1) + high = high.reshape(dims[0], -1) + + low_norm = low/torch.norm(low, dim=1, keepdim=True) + high_norm = high/torch.norm(high, dim=1, keepdim=True) + + low_norm[low_norm != low_norm] = 0.0 + high_norm[high_norm != high_norm] = 0.0 + + omega = 
torch.acos((low_norm*high_norm).sum(1)) + so = torch.sin(omega) + res = (torch.sin((1.0-val)*omega)/so).unsqueeze(1)*low + (torch.sin(val*omega)/so).unsqueeze(1) * high + + return res.reshape(dims) + +class KSamplerVariationsWithNoise: + @classmethod + def INPUT_TYPES(s): + return {"required": { + "model": ("MODEL", ), + "latent_image": ("LATENT", ), + "main_seed": ("INT:seed", {"default": 0, "min": 0, "max": 0xffffffffffffffff}), + "steps": ("INT", {"default": 20, "min": 1, "max": 10000}), + "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0, "step":0.1, "round": 0.01}), + "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ), + "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ), + "positive": ("CONDITIONING", ), + "negative": ("CONDITIONING", ), + "variation_strength": ("FLOAT", {"default": 0.2, "min": 0.0, "max": 1.0, "step":0.01, "round": 0.01}), + #"start_at_step": ("INT", {"default": 0, "min": 0, "max": 10000}), + #"end_at_step": ("INT", {"default": 10000, "min": 0, "max": 10000}), + #"return_with_leftover_noise": (["disable", "enable"], ), + "variation_seed": ("INT:seed", {"default": random.randint(0, 0xffffffffffffffff), "min": 0, "max": 0xffffffffffffffff}), + }} + + RETURN_TYPES = ("LATENT",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, model, latent_image, main_seed, steps, cfg, sampler_name, scheduler, positive, negative, variation_strength, variation_seed): + generator = torch.manual_seed(main_seed) + batch_size, _, height, width = latent_image["samples"].shape + base_noise = torch.randn((1, 4, height, width), dtype=torch.float32, device="cpu", generator=generator).repeat(batch_size, 1, 1, 1).cpu() + + generator = torch.manual_seed(variation_seed) + variation_noise = torch.randn((batch_size, 4, height, width), dtype=torch.float32, device="cpu", generator=generator).cpu() + + slerp_noise = slerp(variation_strength, base_noise, variation_noise) + + device = comfy.model_management.get_torch_device() + end_at_step = steps #min(steps, end_at_step) + start_at_step = 0 #min(start_at_step, end_at_step) + real_model = None + comfy.model_management.load_model_gpu(model) + real_model = model.model + sampler = comfy.samplers.KSampler(real_model, steps=steps, device=device, sampler=sampler_name, scheduler=scheduler, denoise=1.0, model_options=model.model_options) + sigmas = sampler.sigmas + sigma = sigmas[start_at_step] - sigmas[end_at_step] + sigma /= model.model.latent_format.scale_factor + sigma = sigma.cpu().numpy() + + work_latent = latent_image.copy() + work_latent["samples"] = latent_image["samples"].clone() + slerp_noise * sigma + + force_full_denoise = True + #if return_with_leftover_noise == "enable": + # force_full_denoise = False + + disable_noise = True + + return common_ksampler(model, main_seed, steps, cfg, sampler_name, scheduler, positive, negative, work_latent, denoise=1.0, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise) + +class SDXLEmptyLatentSizePicker: + def __init__(self): + self.device = comfy.model_management.intermediate_device() + + @classmethod + def INPUT_TYPES(s): + return {"required": { + "resolution": (["704x1408 (0.5)","704x1344 (0.52)","768x1344 (0.57)","768x1280 (0.6)","832x1216 (0.68)","832x1152 (0.72)","896x1152 (0.78)","896x1088 (0.82)","960x1088 (0.88)","960x1024 (0.94)","1024x1024 (1.0)","1024x960 (1.07)","1088x960 (1.13)","1088x896 (1.21)","1152x896 (1.29)","1152x832 (1.38)","1216x832 (1.46)","1280x768 (1.67)","1344x768 (1.75)","1344x704 
(1.91)","1408x704 (2.0)","1472x704 (2.09)","1536x640 (2.4)","1600x640 (2.5)","1664x576 (2.89)","1728x576 (3.0)",], {"default": "1024x1024 (1.0)"}), + "batch_size": ("INT", {"default": 1, "min": 1, "max": 4096}), + }} + + RETURN_TYPES = ("LATENT","INT","INT",) + RETURN_NAMES = ("LATENT","width", "height",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, resolution, batch_size): + width, height = resolution.split(" ")[0].split("x") + width = int(width) + height = int(height) + + latent = torch.zeros([batch_size, 4, height // 8, width // 8], device=self.device) + + return ({"samples":latent}, width, height,) + +LUTS_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "luts") +# From https://github.com/yoonsikp/pycubelut/blob/master/pycubelut.py (MIT license) +class ImageApplyLUT: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + "lut_file": ([f for f in os.listdir(LUTS_DIR) if f.endswith('.cube')], ), + "log_colorspace": ("BOOLEAN", { "default": False }), + "clip_values": ("BOOLEAN", { "default": False }), + "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.1 }), + }} + + RETURN_TYPES = ("IMAGE",) + FUNCTION = "execute" + CATEGORY = "essentials" + + # TODO: check if we can do without numpy + def execute(self, image, lut_file, log_colorspace, clip_values, strength): + from colour.io.luts.iridas_cube import read_LUT_IridasCube + + lut = read_LUT_IridasCube(os.path.join(LUTS_DIR, lut_file)) + lut.name = lut_file + + if clip_values: + if lut.domain[0].max() == lut.domain[0].min() and lut.domain[1].max() == lut.domain[1].min(): + lut.table = np.clip(lut.table, lut.domain[0, 0], lut.domain[1, 0]) + else: + if len(lut.table.shape) == 2: # 3x1D + for dim in range(3): + lut.table[:, dim] = np.clip(lut.table[:, dim], lut.domain[0, dim], lut.domain[1, dim]) + else: # 3D + for dim in range(3): + lut.table[:, :, :, dim] = np.clip(lut.table[:, :, :, dim], lut.domain[0, dim], lut.domain[1, dim]) + + out = [] + for img in image: # TODO: is this more resource efficient? should we use a batch instead? + lut_img = img.numpy().copy() + + is_non_default_domain = not np.array_equal(lut.domain, np.array([[0., 0., 0.], [1., 1., 1.]])) + dom_scale = None + if is_non_default_domain: + dom_scale = lut.domain[1] - lut.domain[0] + lut_img = lut_img * dom_scale + lut.domain[0] + if log_colorspace: + lut_img = lut_img ** (1/2.2) + lut_img = lut.apply(lut_img) + if log_colorspace: + lut_img = lut_img ** (2.2) + if is_non_default_domain: + lut_img = (lut_img - lut.domain[0]) / dom_scale + + lut_img = torch.from_numpy(lut_img) + if strength < 1.0: + lut_img = strength * lut_img + (1 - strength) * img + out.append(lut_img) + + out = torch.stack(out) + + return (out, ) + +FONTS_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "fonts") +class DrawText: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "text": ("STRING", { "multiline": True, "default": "Hello, World!" 
}), + "font": ([f for f in os.listdir(FONTS_DIR) if f.endswith('.ttf') or f.endswith('.otf')], ), + "size": ("INT", { "default": 56, "min": 1, "max": 9999, "step": 1 }), + "color": ("STRING", { "multiline": False, "default": "#FFFFFF" }), + "background_color": ("STRING", { "multiline": False, "default": "#00000000" }), + "shadow_distance": ("INT", { "default": 0, "min": 0, "max": 100, "step": 1 }), + "shadow_blur": ("INT", { "default": 0, "min": 0, "max": 100, "step": 1 }), + "shadow_color": ("STRING", { "multiline": False, "default": "#000000" }), + "alignment": (["left", "center", "right"],), + "width": ("INT", { "default": 0, "min": 0, "max": MAX_RESOLUTION, "step": 1 }), + "height": ("INT", { "default": 0, "min": 0, "max": MAX_RESOLUTION, "step": 1 }), + }, + } + + RETURN_TYPES = ("IMAGE", "MASK",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, text, font, size, color, background_color, shadow_distance, shadow_blur, shadow_color, alignment, width, height): + font = ImageFont.truetype(os.path.join(FONTS_DIR, font), size) + + lines = text.split("\n") + + # Calculate the width and height of the text + text_width = max(font.getbbox(line)[2] for line in lines) + line_height = font.getmask(text).getbbox()[3] + font.getmetrics()[1] # add descent to height + text_height = line_height * len(lines) + + width = width if width > 0 else text_width + height = height if height > 0 else text_height + + background_color = ImageColor.getrgb(background_color) + image = Image.new('RGBA', (width + shadow_distance, height + shadow_distance), color=background_color) + + image_shadow = None + if shadow_distance > 0: + image_shadow = Image.new('RGBA', (width + shadow_distance, height + shadow_distance), color=background_color) + + for i, line in enumerate(lines): + line_width = font.getbbox(line)[2] + #text_height =font.getbbox(line)[3] + if alignment == "left": + x = 0 + elif alignment == "center": + x = (width - line_width) / 2 + elif alignment == "right": + x = width - line_width + y = i * line_height + + draw = ImageDraw.Draw(image) + draw.text((x, y), line, font=font, fill=color) + + if image_shadow is not None: + draw = ImageDraw.Draw(image_shadow) + draw.text((x + shadow_distance, y + shadow_distance), line, font=font, fill=shadow_color) + + if image_shadow is not None: + image_shadow = image_shadow.filter(ImageFilter.GaussianBlur(shadow_blur)) + image = Image.alpha_composite(image_shadow, image) + + image = pb(T.ToTensor()(image).unsqueeze(0)) + mask = image[:, :, :, 3] if image.shape[3] == 4 else torch.ones_like(image[:, :, :, 0]) + + return (image[:, :, :, :3], mask,) + +class RemBGSession: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "model": (["u2net: general purpose", "u2netp: lightweight general purpose", "u2net_human_seg: human segmentation", "u2net_cloth_seg: cloths Parsing", "silueta: very small u2net", "isnet-general-use: general purpose", "isnet-anime: anime illustrations", "sam: general purpose"],), + "providers": (['CPU', 'CUDA', 'ROCM', 'DirectML', 'OpenVINO', 'CoreML', 'Tensorrt', 'Azure'],), + }, + } + + RETURN_TYPES = ("REMBG_SESSION",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, model, providers): + from rembg import new_session as rembg_new_session + + model = model.split(":")[0] + return (rembg_new_session(model, providers=[providers+"ExecutionProvider"]),) + +class ImageRemoveBackground: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "rembg_session": ("REMBG_SESSION",), + "image": ("IMAGE",), 
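+                # rembg returns RGBA; execute() splits the alpha channel off as the MASK output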
+ }, + } + + RETURN_TYPES = ("IMAGE", "MASK",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, rembg_session, image): + from rembg import remove as rembg + + image = p(image) + output = [] + for img in image: + img = T.ToPILImage()(img) + img = rembg(img, session=rembg_session) + output.append(T.ToTensor()(img)) + + output = torch.stack(output, dim=0) + output = pb(output) + mask = output[:, :, :, 3] if output.shape[3] == 4 else torch.ones_like(output[:, :, :, 0]) + + return(output[:, :, :, :3], mask,) + +class NoiseFromImage: + @classmethod + def INPUT_TYPES(s): + return { + "required": { + "image": ("IMAGE",), + "noise_size": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01 }), + "color_noise": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01 }), + "mask_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01 }), + "mask_scale_diff": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01 }), + "noise_strenght": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01 }), + "saturation": ("FLOAT", {"default": 2.0, "min": 0.0, "max": 100.0, "step": 0.1 }), + "contrast": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0, "step": 0.1 }), + "blur": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.1 }), + }, + "optional": { + "noise_mask": ("IMAGE",), + } + } + + RETURN_TYPES = ("IMAGE","IMAGE",) + FUNCTION = "execute" + CATEGORY = "essentials" + + def execute(self, image, noise_size, color_noise, mask_strength, mask_scale_diff, noise_strenght, saturation, contrast, blur, noise_mask=None): + torch.manual_seed(0) + + elastic_alpha = max(image.shape[1], image.shape[2])# * noise_size + elastic_sigma = elastic_alpha / 400 * noise_size + + blur_size = int(6 * blur+1) + if blur_size % 2 == 0: + blur_size+= 1 + + if noise_mask is None: + noise_mask = image + + # Ensure noise mask is the same size as the image + if noise_mask.shape[1:] != image.shape[1:]: + noise_mask = F.interpolate(p(noise_mask), size=(image.shape[1], image.shape[2]), mode='bicubic', align_corners=False) + noise_mask = pb(noise_mask) + # Ensure we have the same number of masks and images + if noise_mask.shape[0] > image.shape[0]: + noise_mask = noise_mask[:image.shape[0]] + else: + noise_mask = torch.cat((noise_mask, noise_mask[-1:].repeat((image.shape[0]-noise_mask.shape[0], 1, 1, 1))), dim=0) + + # Convert image to grayscale mask + noise_mask = noise_mask.mean(dim=3).unsqueeze(-1) + + # add color noise + imgs = p(image.clone()) + if color_noise > 0: + color_noise = torch.normal(torch.zeros_like(imgs), std=color_noise) + + #color_noise = torch.rand_like(imgs) * (color_noise * 2) - color_noise + + color_noise *= (imgs - imgs.min()) / (imgs.max() - imgs.min()) + + imgs = imgs + color_noise + imgs = imgs.clamp(0, 1) + + # create fine noise + fine_noise = [] + for n in imgs: + avg_color = n.mean(dim=[1,2]) + + tmp_noise = T.ElasticTransform(alpha=elastic_alpha, sigma=elastic_sigma, fill=avg_color.tolist())(n) + #tmp_noise = T.functional.adjust_saturation(tmp_noise, 2.0) + tmp_noise = T.GaussianBlur(blur_size, blur)(tmp_noise) + tmp_noise = T.ColorJitter(contrast=(contrast,contrast), saturation=(saturation,saturation))(tmp_noise) + fine_noise.append(tmp_noise) + + #tmp_noise = F.interpolate(tmp_noise, scale_factor=.1, mode='bilinear', align_corners=False) + #tmp_noise = F.interpolate(tmp_noise, size=(tmp_noise.shape[1], tmp_noise.shape[2]), mode='bilinear', align_corners=False) + + #tmp_noise = T.ElasticTransform(alpha=elastic_alpha, 
sigma=elastic_sigma/3, fill=avg_color.tolist())(n) + #tmp_noise = T.GaussianBlur(blur_size, blur)(tmp_noise) + #tmp_noise = T.functional.adjust_saturation(tmp_noise, saturation) + #tmp_noise = T.ColorJitter(contrast=(contrast,contrast), saturation=(saturation,saturation))(tmp_noise) + #fine_noise.append(tmp_noise) + + imgs = None + del imgs + + fine_noise = torch.stack(fine_noise, dim=0) + fine_noise = pb(fine_noise) + #fine_noise = torch.stack(fine_noise, dim=0) + #fine_noise = pb(fine_noise) + mask_scale_diff = min(mask_scale_diff, 0.99) + if mask_scale_diff > 0: + coarse_noise = F.interpolate(p(fine_noise), scale_factor=1-mask_scale_diff, mode='area') + coarse_noise = F.interpolate(coarse_noise, size=(fine_noise.shape[1], fine_noise.shape[2]), mode='bilinear', align_corners=False) + coarse_noise = pb(coarse_noise) + else: + coarse_noise = fine_noise + + #noise_mask = noise_mask * mask_strength + (1 - mask_strength) + # merge fine and coarse noise + output = (1 - noise_mask) * coarse_noise + noise_mask * fine_noise + #noise_mask = noise_mask * mask_strength + if mask_strength < 1: + noise_mask = noise_mask.pow(mask_strength) + noise_mask = torch.nan_to_num(noise_mask).clamp(0, 1) + output = noise_mask * output + (1 - noise_mask) * image + + # apply noise to image + output = output * noise_strenght + image * (1 - noise_strenght) + output = output.clamp(0, 1) + + return (output,noise_mask.repeat(1,1,1,3),) + +class RemoveLatentMask: + @classmethod + def INPUT_TYPES(s): + return {"required": { "samples": ("LATENT",),}} + RETURN_TYPES = ("LATENT",) + FUNCTION = "execute" + + CATEGORY = "essentials" + + def execute(self, samples): + s = samples.copy() + if "noise_mask" in s: + del s["noise_mask"] + + return (s,) + +NODE_CLASS_MAPPINGS = { + "GetImageSize+": GetImageSize, + + "ImageResize+": ImageResize, + "ImageCrop+": ImageCrop, + "ImageFlip+": ImageFlip, + + "ImageDesaturate+": ImageDesaturate, + "ImagePosterize+": ImagePosterize, + "ImageCASharpening+": ImageCAS, + "ImageSeamCarving+": ImageSeamCarving, + "ImageEnhanceDifference+": ImageEnhanceDifference, + "ImageExpandBatch+": ImageExpandBatch, + "ImageFromBatch+": ImageFromBatch, + "ImageCompositeFromMaskBatch+": ImageCompositeFromMaskBatch, + "ExtractKeyframes+": ExtractKeyframes, + "ImageApplyLUT+": ImageApplyLUT, + + "MaskBlur+": MaskBlur, + "MaskFlip+": MaskFlip, + "MaskPreview+": MaskPreview, + "MaskBatch+": MaskBatch, + "MaskExpandBatch+": MaskExpandBatch, + "TransitionMask+": TransitionMask, + "MaskFromColor+": MaskFromColor, + "MaskFromBatch+": MaskFromBatch, + + "SimpleMath+": SimpleMath, + "ConsoleDebug+": ConsoleDebug, + "DebugTensorShape+": DebugTensorShape, + + "ModelCompile+": ModelCompile, + "BatchCount+": BatchCount, + + "KSamplerVariationsStochastic+": KSamplerVariationsStochastic, + "KSamplerVariationsWithNoise+": KSamplerVariationsWithNoise, + "CLIPTextEncodeSDXL+": CLIPTextEncodeSDXLSimplified, + "SDXLEmptyLatentSizePicker+": SDXLEmptyLatentSizePicker, + + "DrawText+": DrawText, + "RemBGSession+": RemBGSession, + "ImageRemoveBackground+": ImageRemoveBackground, + + "RemoveLatentMask+": RemoveLatentMask, + + #"NoiseFromImage~": NoiseFromImage, +} + +NODE_DISPLAY_NAME_MAPPINGS = { + "GetImageSize+": "🔧 Get Image Size", + "ImageResize+": "🔧 Image Resize", + "ImageCrop+": "🔧 Image Crop", + "ImageFlip+": "🔧 Image Flip", + + "ImageDesaturate+": "🔧 Image Desaturate", + "ImagePosterize+": "🔧 Image Posterize", + "ImageCASharpening+": "🔧 Image Contrast Adaptive Sharpening", + "ImageSeamCarving+": "🔧 Image Seam Carving", + 
"ImageEnhanceDifference+": "🔧 Image Enhance Difference", + "ImageExpandBatch+": "🔧 Image Expand Batch", + "ImageFromBatch+": "🔧 Image From Batch", + "ImageCompositeFromMaskBatch+": "🔧 Image Composite From Mask Batch", + "ExtractKeyframes+": "🔧 Extract Keyframes (experimental)", + "ImageApplyLUT+": "🔧 Image Apply LUT", + + "MaskBlur+": "🔧 Mask Blur", + "MaskFlip+": "🔧 Mask Flip", + "MaskPreview+": "🔧 Mask Preview", + "MaskBatch+": "🔧 Mask Batch", + "MaskExpandBatch+": "🔧 Mask Expand Batch", + "TransitionMask+": "🔧 Transition Mask", + "MaskFromColor+": "🔧 Mask From Color", + "MaskFromBatch+": "🔧 Mask From Batch", + + "SimpleMath+": "🔧 Simple Math", + "ConsoleDebug+": "🔧 Console Debug", + "DebugTensorShape+": "🔧 Tensor Shape Debug", + + "ModelCompile+": "🔧 Compile Model", + "BatchCount+": "🔧 Batch Count", + + "KSamplerVariationsStochastic+": "🔧 KSampler Stochastic Variations", + "KSamplerVariationsWithNoise+": "🔧 KSampler Variations with Noise Injection", + "CLIPTextEncodeSDXL+": "🔧 SDXLCLIPTextEncode", + "SDXLEmptyLatentSizePicker+": "🔧 SDXL Empty Latent Size Picker", + + "DrawText+": "🔧 Draw Text", + "RemBGSession+": "🔧 RemBG Session", + "ImageRemoveBackground+": "🔧 Image Remove Background", + + "RemoveLatentMask+": "🔧 Remove Latent Mask", + + #"NoiseFromImage~": "🔧 Noise From Image", +} diff --git a/custom_nodes/ComfyUI_essentials/fonts/put_font_files_here.txt b/custom_nodes/ComfyUI_essentials/fonts/put_font_files_here.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/custom_nodes/ComfyUI_essentials/luts/put_luts_files_here.txt b/custom_nodes/ComfyUI_essentials/luts/put_luts_files_here.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/custom_nodes/ComfyUI_essentials/requirements.txt b/custom_nodes/ComfyUI_essentials/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..355e4af422f96f5d955847c7f4bea8db62b1b184 --- /dev/null +++ b/custom_nodes/ComfyUI_essentials/requirements.txt @@ -0,0 +1,3 @@ +numba +colour-science +rembg \ No newline at end of file diff --git a/custom_nodes/ComfyUI_essentials/workflow_all_nodes.json b/custom_nodes/ComfyUI_essentials/workflow_all_nodes.json new file mode 100644 index 0000000000000000000000000000000000000000..fab4c98929e12b9f7ac35bd433ffa55af81480b2 --- /dev/null +++ b/custom_nodes/ComfyUI_essentials/workflow_all_nodes.json @@ -0,0 +1,994 @@ +{ + "last_node_id": 42, + "last_link_id": 61, + "nodes": [ + { + "id": 9, + "type": "ConsoleDebug+", + "pos": [ + 720, + 140 + ], + "size": { + "0": 210, + "1": 60 + }, + "flags": {}, + "order": 12, + "mode": 0, + "inputs": [ + { + "name": "value", + "type": "*", + "link": 3 + } + ], + "properties": { + "Node name for S&R": "ConsoleDebug+" + }, + "widgets_values": [ + "Height:" + ] + }, + { + "id": 28, + "type": "PreviewImage", + "pos": [ + 860, + 1180 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 17, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 23 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 12, + "type": "PreviewImage", + "pos": [ + 860, + 580 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 15, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 11 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 14, + "type": "PreviewImage", + "pos": [ + 860, + 880 
+ ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 16, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 13 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 18, + "type": "MaskPreview+", + "pos": [ + 2100, + 90 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 20, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 19 + } + ], + "properties": { + "Node name for S&R": "MaskPreview+" + } + }, + { + "id": 1, + "type": "GetImageSize+", + "pos": [ + 450, + 80 + ], + "size": { + "0": 210, + "1": 46 + }, + "flags": {}, + "order": 2, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 1 + } + ], + "outputs": [ + { + "name": "width", + "type": "INT", + "links": [ + 2 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "height", + "type": "INT", + "links": [ + 3 + ], + "shape": 3, + "slot_index": 1 + } + ], + "properties": { + "Node name for S&R": "GetImageSize+" + } + }, + { + "id": 8, + "type": "ConsoleDebug+", + "pos": [ + 720, + 40 + ], + "size": { + "0": 210, + "1": 60 + }, + "flags": {}, + "order": 11, + "mode": 0, + "inputs": [ + { + "name": "value", + "type": "*", + "link": 2 + } + ], + "properties": { + "Node name for S&R": "ConsoleDebug+" + }, + "widgets_values": [ + "Width:" + ] + }, + { + "id": 10, + "type": "PreviewImage", + "pos": [ + 860, + 280 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 13, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 9 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 36, + "type": "SimpleMath+", + "pos": [ + 1650, + 780 + ], + "size": { + "0": 210, + "1": 80 + }, + "flags": {}, + "order": 14, + "mode": 0, + "inputs": [ + { + "name": "a", + "type": "INT,FLOAT", + "link": 44 + }, + { + "name": "b", + "type": "INT,FLOAT", + "link": 45 + } + ], + "outputs": [ + { + "name": "INT", + "type": "INT", + "links": [ + 46 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "FLOAT", + "type": "FLOAT", + "links": null, + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "SimpleMath+" + }, + "widgets_values": [ + "a*b" + ] + }, + { + "id": 23, + "type": "ConsoleDebug+", + "pos": [ + 1920, + 780 + ], + "size": { + "0": 210, + "1": 60 + }, + "flags": {}, + "order": 22, + "mode": 0, + "inputs": [ + { + "name": "value", + "type": "*", + "link": 46 + } + ], + "properties": { + "Node name for S&R": "ConsoleDebug+" + }, + "widgets_values": [ + "Value:" + ] + }, + { + "id": 2, + "type": "ImageResize+", + "pos": [ + 430, + 340 + ], + "size": { + "0": 310, + "1": 170 + }, + "flags": {}, + "order": 3, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 4 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 9 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "width", + "type": "INT", + "links": [ + 44 + ], + "shape": 3, + "slot_index": 1 + }, + { + "name": "height", + "type": "INT", + "links": [ + 45 + ], + "shape": 3, + "slot_index": 2 + } + ], + "properties": { + "Node name for S&R": "ImageResize+" + }, + "widgets_values": [ + 256, + 64, + "lanczos", + true + ] + }, + { + "id": 4, + "type": "ImageFlip+", + "pos": [ + 430, + 800 + ], + "size": { + "0": 310, + "1": 60 + }, + "flags": {}, + "order": 4, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 6 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 
11 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImageFlip+" + }, + "widgets_values": [ + "xy" + ] + }, + { + "id": 6, + "type": "ImagePosterize+", + "pos": [ + 430, + 1000 + ], + "size": { + "0": 310, + "1": 60 + }, + "flags": {}, + "order": 5, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 8 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 13 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImagePosterize+" + }, + "widgets_values": [ + 0.5 + ] + }, + { + "id": 27, + "type": "ImageCASharpening+", + "pos": [ + 430, + 1110 + ], + "size": { + "0": 310.79998779296875, + "1": 60 + }, + "flags": {}, + "order": 6, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 22 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 23 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImageCASharpening+" + }, + "widgets_values": [ + 0.8 + ] + }, + { + "id": 15, + "type": "MaskBlur+", + "pos": [ + 1690, + 130 + ], + "size": { + "0": 310, + "1": 82 + }, + "flags": {}, + "order": 9, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 14 + } + ], + "outputs": [ + { + "name": "MASK", + "type": "MASK", + "links": [ + 19 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "MaskBlur+" + }, + "widgets_values": [ + 45, + 28.5 + ] + }, + { + "id": 16, + "type": "MaskFlip+", + "pos": [ + 1690, + 270 + ], + "size": { + "0": 310, + "1": 60 + }, + "flags": {}, + "order": 10, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 15 + } + ], + "outputs": [ + { + "name": "MASK", + "type": "MASK", + "links": [ + 18 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "MaskFlip+" + }, + "widgets_values": [ + "xy" + ] + }, + { + "id": 13, + "type": "PreviewImage", + "pos": [ + 1100, + 760 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 18, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 49 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 37, + "type": "ImageDesaturate+", + "pos": [ + 500, + 920 + ], + "size": { + "0": 190, + "1": 30 + }, + "flags": {}, + "order": 7, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 48 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 49 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "ImageDesaturate+" + } + }, + { + "id": 7, + "type": "LoadImage", + "pos": [ + -90, + 650 + ], + "size": { + "0": 315, + "1": 314 + }, + "flags": {}, + "order": 0, + "mode": 0, + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 1, + 4, + 6, + 8, + 22, + 48, + 57 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "MASK", + "type": "MASK", + "links": null, + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "LoadImage" + }, + "widgets_values": [ + "venere.jpg", + "image" + ] + }, + { + "id": 11, + "type": "PreviewImage", + "pos": [ + 1100, + 450 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 19, + "mode": 0, + "inputs": [ + { + "name": "images", + "type": "IMAGE", + "link": 58 + } + ], + "properties": { + "Node name for S&R": "PreviewImage" + } + }, + { + "id": 40, + "type": "ImageCrop+", + "pos": [ + 430, + 560 + ], + 
"size": { + "0": 310, + "1": 194 + }, + "flags": {}, + "order": 8, + "mode": 0, + "inputs": [ + { + "name": "image", + "type": "IMAGE", + "link": 57 + } + ], + "outputs": [ + { + "name": "IMAGE", + "type": "IMAGE", + "links": [ + 58 + ], + "shape": 3, + "slot_index": 0 + }, + { + "name": "x", + "type": "INT", + "links": null, + "shape": 3 + }, + { + "name": "y", + "type": "INT", + "links": null, + "shape": 3 + } + ], + "properties": { + "Node name for S&R": "ImageCrop+" + }, + "widgets_values": [ + 256, + 256, + "center", + 0, + 0 + ] + }, + { + "id": 20, + "type": "LoadImageMask", + "pos": [ + 1400, + 260 + ], + "size": { + "0": 220.70516967773438, + "1": 318 + }, + "flags": {}, + "order": 1, + "mode": 0, + "outputs": [ + { + "name": "MASK", + "type": "MASK", + "links": [ + 14, + 15 + ], + "shape": 3, + "slot_index": 0 + } + ], + "properties": { + "Node name for S&R": "LoadImageMask" + }, + "widgets_values": [ + "cwf_inpaint_example_mask.png", + "alpha", + "image" + ] + }, + { + "id": 21, + "type": "MaskPreview+", + "pos": [ + 2100, + 380 + ], + "size": { + "0": 210, + "1": 246 + }, + "flags": {}, + "order": 21, + "mode": 0, + "inputs": [ + { + "name": "mask", + "type": "MASK", + "link": 18 + } + ], + "properties": { + "Node name for S&R": "MaskPreview+" + } + } + ], + "links": [ + [ + 1, + 7, + 0, + 1, + 0, + "IMAGE" + ], + [ + 2, + 1, + 0, + 8, + 0, + "*" + ], + [ + 3, + 1, + 1, + 9, + 0, + "*" + ], + [ + 4, + 7, + 0, + 2, + 0, + "IMAGE" + ], + [ + 6, + 7, + 0, + 4, + 0, + "IMAGE" + ], + [ + 8, + 7, + 0, + 6, + 0, + "IMAGE" + ], + [ + 9, + 2, + 0, + 10, + 0, + "IMAGE" + ], + [ + 11, + 4, + 0, + 12, + 0, + "IMAGE" + ], + [ + 13, + 6, + 0, + 14, + 0, + "IMAGE" + ], + [ + 14, + 20, + 0, + 15, + 0, + "MASK" + ], + [ + 15, + 20, + 0, + 16, + 0, + "MASK" + ], + [ + 18, + 16, + 0, + 21, + 0, + "MASK" + ], + [ + 19, + 15, + 0, + 18, + 0, + "MASK" + ], + [ + 22, + 7, + 0, + 27, + 0, + "IMAGE" + ], + [ + 23, + 27, + 0, + 28, + 0, + "IMAGE" + ], + [ + 44, + 2, + 1, + 36, + 0, + "INT,FLOAT" + ], + [ + 45, + 2, + 2, + 36, + 1, + "INT,FLOAT" + ], + [ + 46, + 36, + 0, + 23, + 0, + "*" + ], + [ + 48, + 7, + 0, + 37, + 0, + "IMAGE" + ], + [ + 49, + 37, + 0, + 13, + 0, + "IMAGE" + ], + [ + 57, + 7, + 0, + 40, + 0, + "IMAGE" + ], + [ + 58, + 40, + 0, + 11, + 0, + "IMAGE" + ] + ], + "groups": [], + "config": {}, + "extra": {}, + "version": 0.4 +} \ No newline at end of file diff --git a/models/checkpoints/realisticVision_v51.safetensors b/models/checkpoints/realisticVision_v51.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..be041649019837e489d917b5371c46a6ba343ebf --- /dev/null +++ b/models/checkpoints/realisticVision_v51.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15012c538f503ce2ebfc2c8547b268c75ccdaff7a281db55399940ff1d70e21d +size 2132625894 diff --git a/models/embeddings/altman.pt b/models/embeddings/altman.pt index 0ba733a09767c48cc7dd60ae712945a1f7abf3a5..797e274802eb3aae55dd580d0ca42a670bb37732 100644 Binary files a/models/embeddings/altman.pt and b/models/embeddings/altman.pt differ diff --git a/models/embeddings/andrew_ng.pt b/models/embeddings/andrew_ng.pt index b742d400a68264360d0d733951916a2b0ab53bbe..8e05fa9cf97018df89e8ba9188c01eb15b7e9ac5 100644 Binary files a/models/embeddings/andrew_ng.pt and b/models/embeddings/andrew_ng.pt differ diff --git a/models/embeddings/bengio.pt b/models/embeddings/bengio.pt index bbe7e671f91da63ab21e109cfb946ce599315db3..f649a7e203703dd4ff2363c1f9dfc3917c70f5fa 100644 Binary files 
a/models/embeddings/bengio.pt and b/models/embeddings/bengio.pt differ diff --git a/models/embeddings/beyonce.pt b/models/embeddings/beyonce.pt index 7bd27479da8e461d15a4218a0a03eb3cf59e565c..66388648e1477481031b1e886d3ddd879c8b61f3 100644 Binary files a/models/embeddings/beyonce.pt and b/models/embeddings/beyonce.pt differ diff --git a/models/embeddings/biden.pt b/models/embeddings/biden.pt index d0a8f5e768123adece29e875a32394f7794bcfe9..9d9137c2ecced1a61a95ea730b3e2863015b12e3 100644 Binary files a/models/embeddings/biden.pt and b/models/embeddings/biden.pt differ diff --git a/models/embeddings/eli.pt b/models/embeddings/eli.pt index 9a4ee8c5b89b162132dd6ef7950b69e9e4b650c2..7e9f602b27b7d3979a378258ad674d83a34c0f9c 100644 Binary files a/models/embeddings/eli.pt and b/models/embeddings/eli.pt differ diff --git a/models/embeddings/emotion-angry.pt b/models/embeddings/emotion-angry.pt index 90357667bbdb028323b05b7da293d8e9c68fcc80..67ccb4e78e23167c616a8596d9bcb92a24a38f49 100644 Binary files a/models/embeddings/emotion-angry.pt and b/models/embeddings/emotion-angry.pt differ diff --git a/models/embeddings/emotion-defiance.pt b/models/embeddings/emotion-defiance.pt index fb80fad2c3fb508e3d5cb0380dad6738fd8802dd..52298c955fa613f77149fd64f1343c467e6f3496 100644 Binary files a/models/embeddings/emotion-defiance.pt and b/models/embeddings/emotion-defiance.pt differ diff --git a/models/embeddings/emotion-grin.pt b/models/embeddings/emotion-grin.pt index 070465601d4c524370398d5337c78697c3ebd5d8..2c3045e86551b8b1de64eefe06478516568c9327 100644 Binary files a/models/embeddings/emotion-grin.pt and b/models/embeddings/emotion-grin.pt differ diff --git a/models/embeddings/emotion-happy.pt b/models/embeddings/emotion-happy.pt index 53220f4aac28ec033f551706e6af74e34da51ea4..77e341b06a221a19690a6d5eecbd824b34fa4351 100644 Binary files a/models/embeddings/emotion-happy.pt and b/models/embeddings/emotion-happy.pt differ diff --git a/models/embeddings/emotion-laugh.pt b/models/embeddings/emotion-laugh.pt index fa7983950d8c1b2bdfdc8d64acecec11b49b234e..6190ac7a7af2aa04556215d3897a775f782486fc 100644 Binary files a/models/embeddings/emotion-laugh.pt and b/models/embeddings/emotion-laugh.pt differ diff --git a/models/embeddings/emotion-sad.pt b/models/embeddings/emotion-sad.pt index 0edb58e79320659bd56b1b77d5d280dda0c49601..cefaff30a81edf5a9fdf7cfc495dd450e8c94a2e 100644 Binary files a/models/embeddings/emotion-sad.pt and b/models/embeddings/emotion-sad.pt differ diff --git a/models/embeddings/emotion-shock.pt b/models/embeddings/emotion-shock.pt index 3948bbc9573054426a6657f6ac1eea318b3ee87c..1edf7f5396b149bb36ea01d703bdd04261be6f6b 100644 Binary files a/models/embeddings/emotion-shock.pt and b/models/embeddings/emotion-shock.pt differ diff --git a/models/embeddings/emotion-smile.pt b/models/embeddings/emotion-smile.pt index 7672d1e01fdf3ecc2c55543cd44f2d1844f16993..a8bff6806985d41630e67ab97dd6f658d916848d 100644 Binary files a/models/embeddings/emotion-smile.pt and b/models/embeddings/emotion-smile.pt differ diff --git a/models/embeddings/harry.pt b/models/embeddings/harry.pt index 25379f58fc61c85f0e69e1d46cbb77e4b45b6f1e..71a1d4b65db09f1da3c5417162cdcaf209ef2d4a 100644 Binary files a/models/embeddings/harry.pt and b/models/embeddings/harry.pt differ diff --git a/models/embeddings/hermione.pt b/models/embeddings/hermione.pt index b762e88a5d1c44e0b6ece451b53baec68928df6c..2ca1e1d48b4e1b4c61e5e4b741d8f043f6ebaf00 100644 Binary files a/models/embeddings/hermione.pt and b/models/embeddings/hermione.pt differ diff 
--git a/models/embeddings/hinton.pt b/models/embeddings/hinton.pt index 8d4a7032d05b795f740d42a61f4bc8aa1a4b104d..6aa61486e80f116394b89ad9c88cbf33f48d4ede 100644 Binary files a/models/embeddings/hinton.pt and b/models/embeddings/hinton.pt differ diff --git a/models/embeddings/huang.pt b/models/embeddings/huang.pt index 0436539482ad13229ebbf7b78ab6ec0fde7b55ef..ffcc52d804991b975ec1790513dd4ce45bfc5553 100644 Binary files a/models/embeddings/huang.pt and b/models/embeddings/huang.pt differ diff --git a/models/embeddings/ironman.pt b/models/embeddings/ironman.pt index 72424445aab1ae9da26b6342d786f86b57a27b8f..32197acba69fdb6f3766dbda945ba86b96578fbd 100644 Binary files a/models/embeddings/ironman.pt and b/models/embeddings/ironman.pt differ diff --git a/models/embeddings/jack_chen.pt b/models/embeddings/jack_chen.pt index e832f916dcff201004fdb54e5a0f48af4f5286d6..c99a59740e3770256797fa8fa9b59d00b15be822 100644 Binary files a/models/embeddings/jack_chen.pt and b/models/embeddings/jack_chen.pt differ diff --git a/models/embeddings/johnson.pt b/models/embeddings/johnson.pt index 3aaa5c11d205cb8bf30a1dd4f4fafb03409d5c92..804e219b78d678c8ae1ab201b965b819c525731e 100644 Binary files a/models/embeddings/johnson.pt and b/models/embeddings/johnson.pt differ diff --git a/models/embeddings/lecun.pt b/models/embeddings/lecun.pt index 4ce5b46de5f3689f5e39b9eadf2e5a363dbdac84..c62790f3453ab20186a76df38e75aee13c88179c 100644 Binary files a/models/embeddings/lecun.pt and b/models/embeddings/lecun.pt differ diff --git a/models/embeddings/lifeifei.pt b/models/embeddings/lifeifei.pt index 4a4bb66edc6dfe9e972c783043692cd7acff3c7b..dc84740f0dee898f04097fc0d77739b119c0506a 100644 Binary files a/models/embeddings/lifeifei.pt and b/models/embeddings/lifeifei.pt differ diff --git a/models/embeddings/lisa.pt b/models/embeddings/lisa.pt index 54900bc5e9ada3cc0bef0affa39517b3334ab33b..0e1e0a571958c41099eb5d3c6a7e0783ec0cf480 100644 Binary files a/models/embeddings/lisa.pt and b/models/embeddings/lisa.pt differ diff --git a/models/embeddings/mona.pt b/models/embeddings/mona.pt index a3765c35a2454c508e52e1d58ade97be5f1d834d..2f625353ae3e6e0c57fa3171b0a5b445ebe3c714 100644 Binary files a/models/embeddings/mona.pt and b/models/embeddings/mona.pt differ diff --git a/models/embeddings/monroe.pt b/models/embeddings/monroe.pt index d2e356dae69b94f0ef9730b8a2afe71e42722fe4..1aee832bca99f5fdea541d970fb861a68cf854dd 100644 Binary files a/models/embeddings/monroe.pt and b/models/embeddings/monroe.pt differ diff --git a/models/embeddings/musk.pt b/models/embeddings/musk.pt index 0a41f70e861a5e6c02062f5032c7b73a544aa484..1e155370ef08dc528bc193141bf176a6fd744250 100644 Binary files a/models/embeddings/musk.pt and b/models/embeddings/musk.pt differ diff --git a/models/embeddings/obama.pt b/models/embeddings/obama.pt index db5a2d99c005652c2ae40c599ea2f0940b91e8e5..153132f69fc8ba56c8741bc2fef56ded93673075 100644 Binary files a/models/embeddings/obama.pt and b/models/embeddings/obama.pt differ diff --git a/models/embeddings/scarlett.pt b/models/embeddings/scarlett.pt index a35c4706215ec62fa367e08e2cff28c36bc0d342..36410f1fcdfdab3d689b3807f44f8c26b7a7d49a 100644 Binary files a/models/embeddings/scarlett.pt and b/models/embeddings/scarlett.pt differ diff --git a/models/embeddings/taylor.pt b/models/embeddings/taylor.pt index 15f47889d72e2dc42f19d50aeb0f9e3c291ffcaa..136e40eb9633798ba579b7766a79ef93ec7bfdec 100644 Binary files a/models/embeddings/taylor.pt and b/models/embeddings/taylor.pt differ diff --git a/models/embeddings/trump.pt 
b/models/embeddings/trump.pt index 0df8cbe4860475c57386ac87c939b875559e9d74..281a8cd04adb3e96677dbae28cc3357aa4000673 100644 Binary files a/models/embeddings/trump.pt and b/models/embeddings/trump.pt differ diff --git a/models/embeddings/zuck.pt b/models/embeddings/zuck.pt index e367b0b1a0484c2a2080f44193b02a1f87355acf..4058ffa9d1e5143a27942094420f14ee2a4cb1ea 100644 Binary files a/models/embeddings/zuck.pt and b/models/embeddings/zuck.pt differ diff --git a/models/sams/sam_vit_b_01ec64.pth b/models/sams/sam_vit_b_01ec64.pth new file mode 100644 index 0000000000000000000000000000000000000000..ab7d111e57bd052a76fe669986560e3555e9c8f6 --- /dev/null +++ b/models/sams/sam_vit_b_01ec64.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec2df62732614e57411cdcf32a23ffdf28910380d03139ee0f4fcbe91eb8c912 +size 375042383 diff --git a/models/ultralytics/bbox/face_yolov8m.pt b/models/ultralytics/bbox/face_yolov8m.pt new file mode 100644 index 0000000000000000000000000000000000000000..dbfa5813f1ecf8c0b80c12fc5951a706afdeaf30 --- /dev/null +++ b/models/ultralytics/bbox/face_yolov8m.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3893a92c5c1907136b6cc75404094db767c1e0cfefe1b43e87dad72af2e4c9f +size 51996128 diff --git a/models/ultralytics/bbox/hand_yolov8s.pt b/models/ultralytics/bbox/hand_yolov8s.pt new file mode 100644 index 0000000000000000000000000000000000000000..0248de16969bce69b7bdf05b6e67373bcb634427 --- /dev/null +++ b/models/ultralytics/bbox/hand_yolov8s.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30878cea9870964d4a238339e9dcff002078bbbaa1a058b07e11c167f67eca1c +size 22484536 diff --git a/models/ultralytics/segm/person_yolov8m-seg.pt b/models/ultralytics/segm/person_yolov8m-seg.pt new file mode 100644 index 0000000000000000000000000000000000000000..a73627da3336d3910b69f3ed83b65f416a705c5d --- /dev/null +++ b/models/ultralytics/segm/person_yolov8m-seg.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fd7e562f240a5debd48bf737753de6fb60c63f8664121bb522f090a885d8254 +size 54791722 diff --git a/models/upscale_models/4xUltrasharpV10.pt b/models/upscale_models/4xUltrasharpV10.pt new file mode 100644 index 0000000000000000000000000000000000000000..9f3bb839bebd6cd26c94122b7651261d0b346a50 --- /dev/null +++ b/models/upscale_models/4xUltrasharpV10.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5812231fc936b42af08a5edba784195495d303d5b3248c24489ef0c4021fe01 +size 66961958 diff --git a/models/vae/vae-ft-mse-840000-ema-pruned.safetensors b/models/vae/vae-ft-mse-840000-ema-pruned.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..14a39ba28ca5d7ffb8efcf9a24ce5fb31120200b --- /dev/null +++ b/models/vae/vae-ft-mse-840000-ema-pruned.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:735e4c3a447a3255760d7f86845f09f937809baa529c17370d83e4c3758f3c75 +size 334641190 diff --git a/user/default/comfy.settings.json b/user/default/comfy.settings.json new file mode 100644 index 0000000000000000000000000000000000000000..9e26dfeeb6e641a33dae4961196235bdb965b21b --- /dev/null +++ b/user/default/comfy.settings.json @@ -0,0 +1 @@ +{} \ No newline at end of file diff --git a/user/default/comfy.templates.json b/user/default/comfy.templates.json new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/web/extensions/FizzleDorf/Folder here to satisfy init, eventually 
I'll have stuff in here..txt b/web/extensions/FizzleDorf/Folder here to satisfy init, eventually I'll have stuff in here..txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
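A note on the binary stanzas above: the checkpoint, VAE, SAM, upscaler, and ultralytics weights are committed as Git LFS pointers (the version/oid/size stanzas), so a plain clone contains only small pointer files until `git lfs pull` fetches the real content. Below is a minimal Python sketch for verifying a fetched file against its pinned digest; the path, sha256, and size are copied from the models/sams/sam_vit_b_01ec64.pth stanza above, and the same check works for any other pointer file if you substitute its oid and size.

import hashlib
from pathlib import Path

# Values copied from the models/sams/sam_vit_b_01ec64.pth pointer stanza above;
# swap in the oid/size from any other pointer stanza to verify that file instead.
EXPECTED_SHA256 = "ec2df62732614e57411cdcf32a23ffdf28910380d03139ee0f4fcbe91eb8c912"
EXPECTED_SIZE = 375042383
MODEL_PATH = Path("models/sams/sam_vit_b_01ec64.pth")

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file in 1 MiB chunks so multi-GB checkpoints never load into RAM.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

size = MODEL_PATH.stat().st_size
if size != EXPECTED_SIZE:
    raise SystemExit(f"{MODEL_PATH}: size {size} != {EXPECTED_SIZE} (still an LFS pointer? run `git lfs pull`)")
if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    raise SystemExit(f"{MODEL_PATH}: checksum mismatch, re-download the file")
print(f"{MODEL_PATH}: OK")

Streaming the hash keeps memory flat even for the ~2 GB realisticVision checkpoint pinned above.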