# vyro-workflows

Based on comfy hash 2ef459b.
## To use

The `unified-api-workflow.json` file contains the following keys:
```json
"keys": {
    "input": "16",
    "output_default": "57",
    "output_face_swap_stage1": "125",
    "output_face_swap_stage2": "129",
    "output_headshot": "357",
    "output_remix_stage1": "255",
    "output_remix_stage2": "190",
    "output_qr": "301"
}
```
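Client code that drives the workflow over the API can read these node IDs from the file. A minimal sketch (the `keys` object is embedded here for illustration; in practice, load `unified-api-workflow.json` itself and read it from there):

```python
import json

# The "keys" object from unified-api-workflow.json (embedded for illustration).
KEYS = json.loads("""
{
  "input": "16",
  "output_default": "57",
  "output_face_swap_stage1": "125",
  "output_face_swap_stage2": "129",
  "output_headshot": "357",
  "output_remix_stage1": "255",
  "output_remix_stage2": "190",
  "output_qr": "301"
}
""")

# Node IDs are strings, matching ComfyUI's prompt/history format.
input_node_id = KEYS["input"]            # "16"
default_output = KEYS["output_default"]  # "57"
```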
T2I, I2I, and Variate modes all use `output_default`.

Face Swap uses `output_face_swap_stage1` and `output_face_swap_stage2`, depending on whether the user wants the additional denoising stage for facial accuracy (performance can vary depending on the input image and output style).
Remix uses `output_remix_stage1` and `output_remix_stage2`. Remix takes several different image inputs:

- `init_img` is the image to be remixed. Can be blank, in which case the image starts from noise.
- `controlnet_input_img` is the image to be used as a controlnet input. Can be blank, in which case the first `image_prompt` image is used as the controlnet input.
- `image_prompt` is one or more images to be used as styles to be blended.
- `image_prompt_weights` is a comma-separated list of floats that determines the weight of each input image. If blank, all styles are weighted equally at 1.0. Valid values are from 0.0 to 1.0.
- `face_swap_image` is the image to be used for face swapping. Can be blank, in which case no face swap is performed.
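The `image_prompt_weights` handling described above can be sketched as follows (an illustration only; clamping out-of-range values into 0.0-1.0 is an assumption — the workflow may instead reject them):

```python
def parse_image_prompt_weights(raw: str, num_images: int) -> list[float]:
    """Parse the comma-separated image_prompt_weights param.

    A blank string means all styles are weighted equally at 1.0.
    Values are clamped to the valid 0.0-1.0 range (assumption).
    """
    if not raw.strip():
        return [1.0] * num_images
    weights = [float(w) for w in raw.split(",")]
    return [min(max(w, 0.0), 1.0) for w in weights]
```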
Headshot expects `init_img` to be the image to be swapped for the headshot. It ignores the `face_swap_img` param.
QR Code expects `controlnet_input_img` to be a black & white image.
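The mode-to-output mapping above can be put into a small dispatch table (the mode names here are hypothetical labels for illustration; only the `output_*` key names come from the workflow):

```python
# Map each mode to the output key it reads from, per the notes above.
MODE_TO_OUTPUT_KEY = {
    "t2i": "output_default",
    "i2i": "output_default",
    "variate": "output_default",
    "face_swap_stage1": "output_face_swap_stage1",
    "face_swap_stage2": "output_face_swap_stage2",
    "headshot": "output_headshot",
    "remix_stage1": "output_remix_stage1",
    "remix_stage2": "output_remix_stage2",
    "qr": "output_qr",
}

def output_node_id(mode: str, keys: dict) -> str:
    """Return the ComfyUI node ID whose output should be collected for `mode`."""
    return keys[MODE_TO_OUTPUT_KEY[mode]]
```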
## Running in Comfy

The workflow will break when run in Comfy unless you disconnect the preview nodes for the modes you're not using:

- Disconnect these preview nodes when not using Face Swap.
- Disconnect this preview node when not using T2I/I2I/Variate.
To use face_swap or i2i, convert the `init_img` or `face_swap_img` params to inputs on the input node, then connect the nearby image loader/base64 encoder.
<img width="842" alt="image" src="https://github.com/Vyro-ai/vyro-workflows/assets/122644869/8999d29a-aafb-4c7c-b029-db04b3b3d0be">
## Integration notes

Prompt analysis was formerly done outside of the Comfy workflow; inputs should now preferably be passed directly into the workflow.

`init_img` and `face_swap_img` are expected to be base64-encoded strings of PIL-compatible images.
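Encoding an image for these params can be sketched with Pillow (using PNG as the intermediate format is an assumption; any PIL-compatible format should work):

```python
import base64
import io

from PIL import Image

def encode_image_b64(img: Image.Image) -> str:
    """Encode a PIL image as a base64 string for init_img / face_swap_img."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")  # PNG is an assumption, not mandated by the README
    return base64.b64encode(buf.getvalue()).decode("ascii")

def decode_image_b64(data: str) -> Image.Image:
    """Inverse: decode a base64 string back into a PIL image."""
    return Image.open(io.BytesIO(base64.b64decode(data)))
```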
## Requirements

Depends on the comfyui-reactor-node package. Please install it separately.

Make sure to install the requirements.txt file in the root of the repo. The cv2 version matters: it must not conflict with reactor's.
### Prompt Analysis spaCy model

```shell
cd ComfyUI/models
mkdir spacy
cd ../custom_nodes/vyro-workflows
unzip spa.zip -d ../../models/spacy
```
### Interposer model

```shell
cd ComfyUI/models
mkdir interposers
python3 -c 'from huggingface_hub import hf_hub_download
hf_hub_download("city96/SD-Latent-Interposer", local_dir="./interposers",
                filename="xl-to-v1_interposer-v1.1.safetensors",
                force_download=True, local_dir_use_symlinks=False)'
```
Requires the following models in the `ComfyUI/models/checkpoints` folder:
- sd_xl_base_1.0.safetensors
- juggernaut.safetensors
- rev_animated.safetensors
- dreamshaperv8.safetensors
Requires the following LoRAs in the `ComfyUI/models/loras` folder:
- 3dRenderStyleXL.safetensors
- xl_more_art_full_v1.safetensors
- juggernaut_cinematic_xl.safetensors
Models for QR Code (1.5 models only):
- checkpoint: https://civitai.com/models/143386/incredible-world
- lora: https://civitai.com/models/82098/add-more-details-detail-enhancer-tweaker-lora
- controlnet Tile: https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth
- controlnet QR code: https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster/blob/main/control_v1p_sd15_qrcode_monster.ckpt
Requires the following models in the `ComfyUI/models/ipadapter` folder:
- IPAdapter+Face SD1.5- https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus-face_sd15.bin
- IPAdapter+ SD1.5 - https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus_sd15.bin
- IPAdapter SDXL - https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl.bin
- IPAdapter SDXL ViT-H - https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl_vit-h.bin
Requires the following models in the `ComfyUI/models/clip_vision` folder:
- 1_5_clip_vision.safetensors - https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors
## AI Headshots
- epic_realism_inpaint.safetensors - https://civitai.com/models/90018/epicrealism-pureevolution-inpainting
## Outpainting workflow compatible with multiple images
- best model : dreamshaper_8inpainting.safetensors - https://civitai.com/models/4384?modelVersionId=131004
- inpainting controlnet model : https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/blob/main/control_v11p_sd15_inpaint_fp16.safetensors
Recommended parameters: steps: 50, cfg: 10-14
Workflow using the vyro 2-input outpaint mode: `outpaint_standalone.json`
Example output:
<img width="842" alt="image" src="https://i.imgur.com/0jW5EJk.png">
## Object Remover Inpaint

Workflow in: `workflows/object_remover_1.json`
Recommended parameters:
- cfg: 6-8
- steps: 30-40
- denoise: 0.4-0.8
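These recommended ranges can be enforced client-side with a small check (a sketch; the range values are the ones listed above, treated as inclusive):

```python
# Recommended ranges for the object-remover workflow, from the list above.
RECOMMENDED = {"cfg": (6, 8), "steps": (30, 40), "denoise": (0.4, 0.8)}

def check_params(params: dict) -> list[str]:
    """Return a warning string for each param outside its recommended range."""
    warnings = []
    for name, (lo, hi) in RECOMMENDED.items():
        value = params.get(name)
        if value is not None and not (lo <= value <= hi):
            warnings.append(f"{name}={value} outside recommended {lo}-{hi}")
    return warnings
```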
Requirements:
- This workflow uses LaMa; you need to download this LaMa model: https://huggingface.co/hhhzzz/big-lama/resolve/main/big-lama.ckpt and put it in `vyro-workflows\nodes\inpaint_utils\models\big-lama.ckpt`
- Based on another repo, so you may need these requirements: https://github.com/hhhzzyang/Comfyui_Lama/blob/main/requirements.txt
- This checkpoint seems to give the best results for realistic outputs: https://civitai.com/models/25694?modelVersionId=134361
Example output: