# vyro-workflows
Based on comfy hash 2ef459b
## To use:
The `unified-api-workflow.json` contains the following keys:
```
"keys": {
    "input": "16",
    "output_default": "57",
    "output_face_swap_stage1": "125",
    "output_face_swap_stage2": "129",
    "output_headshot": "357",
    "output_remix_stage1": "255",
    "output_remix_stage2": "190",
    "output_qr": "301"
}
```
T2I, I2I, and Variate modes all use `output_default`.
Face swap uses `output_face_swap_stage1` or `output_face_swap_stage2`, depending on whether the user wants the additional denoising stage for facial accuracy (performance can vary with the input image and output style).
Remix uses `output_remix_stage1` and `output_remix_stage2`. Remix takes several different image inputs:
* `init_img` is the image to be remixed. Can be blank, in which case the image starts from noise.
* `controlnet_input_img` is the image used as the controlnet input. Can be blank, in which case the first `image_prompt` image is used as the controlnet input.
* `image_prompt` is one or more images used as styles to be blended. The `image_prompt_weights` param is a comma-separated list of floats that determines the weight of each input image. If blank, all styles are weighted equally at 1.0. Valid values are 0.0-1.0.
* `face_swap_image` is the image used for face swapping. Can be blank, in which case no face swap is performed.
Headshot expects `init_img` to be the image to be swapped for the headshot. It ignores the `face_swap_img` param.
QR Code expects `controlnet_input_img` to be a black & white image.
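A minimal sketch of how these keys might be used when driving the workflow programmatically, assuming the JSON is in ComfyUI API format (node IDs as top-level keys) and that the input node exposes fields named `prompt` and `init_img` (both assumptions; check the actual input node in `unified-api-workflow.json`):
```python
import json

# Mapping from mode to the "keys" entry whose node produces the output.
# face_swap/remix default to stage 2 here; use stage 1 to skip the extra pass.
MODE_TO_OUTPUT = {
    "t2i": "output_default",
    "i2i": "output_default",
    "variate": "output_default",
    "face_swap": "output_face_swap_stage2",
    "remix": "output_remix_stage2",
    "headshot": "output_headshot",
    "qr": "output_qr",
}

def prepare(workflow_path: str, mode: str, prompt: str, init_img_b64: str = ""):
    """Load the workflow, fill in the input node, and return it with the output node ID."""
    with open(workflow_path) as f:
        doc = json.load(f)
    input_id = doc["keys"]["input"]                  # "16"
    output_id = doc["keys"][MODE_TO_OUTPUT[mode]]
    # Field names on the input node are assumptions; verify against the JSON.
    doc[input_id]["inputs"]["prompt"] = prompt
    doc[input_id]["inputs"]["init_img"] = init_img_b64
    return doc, output_id
```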
## Running in Comfy
The workflow will break when used in Comfy unless you disconnect the preview nodes for the modes you're not using.
<img width="818" alt="image" src="https://github.com/Vyro-ai/vyro-workflows/assets/122644869/8727c6c6-8ee1-4740-99b6-5023ffc9afe8">
Disconnect these preview nodes when not using Face Swap.
<img width="651" alt="image" src="https://github.com/Vyro-ai/vyro-workflows/assets/122644869/98d9adb2-b138-4049-872b-118c75bac97a">
Disconnect this preview node when not using T2I/I2I/Variate.
To use face swap or I2I, convert the `init_img` or `face_swap_img` params to inputs on the input node, then connect the nearby image loader/base64 encoder.
<img width="842" alt="image" src="https://github.com/Vyro-ai/vyro-workflows/assets/122644869/8999d29a-aafb-4c7c-b029-db04b3b3d0be">
## Integration notes:
Formerly, prompt analysis was done outside of the Comfy workflow. Now, inputs should preferably be passed directly into the workflow.
`init_img` and `face_swap_img` are expected to be base64-encoded strings of PIL-compatible images.
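For example, a PIL-compatible image could be encoded roughly like this (a sketch; whether PNG or another container is expected is an assumption):
```python
import base64
import io

from PIL import Image

def image_to_base64(path: str) -> str:
    """Read an image with PIL and return its bytes as a base64 string (PNG assumed)."""
    img = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("utf-8")

# e.g. inputs["init_img"] = image_to_base64("selfie.jpg")
```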
## Requirements
Depends on the `comfyui-reactor-node` package; please install it separately.
Make sure to install the `requirements.txt` file in the root of the repo. The pinned cv2 version is important to avoid conflicts with the ReActor node.
### Prompt Analysis Spacy model
```
cd ComfyUI/models
mkdir spacy
cd ../custom_nodes/vyro-workflows
unzip spa.zip -d ../../models/spacy
```
### Interposer model:
```
cd ComfyUI/models
mkdir interposers
```
```
python3 -c 'from huggingface_hub import hf_hub_download
hf_hub_download("city96/SD-Latent-Interposer", local_dir="./interposers",
filename="xl-to-v1_interposer-v1.1.safetensors",
force_download=True, local_dir_use_symlinks=False)'
```
### Requires the following models in the ComfyUI/models/checkpoints folder:
* sd_xl_base_1.0.safetensors
* juggernaut.safetensors
* rev_animated.safetensors
* dreamshaperv8.safetensors
### Requires the following LoRAs in the ComfyUI/models/loras folder:
* 3dRenderStyleXL.safetensors
* xl_more_art_full_v1.safetensors
* juggernaut_cinematic_xl.safetensors
### Models for QR Code (SD 1.5 models only):
* checkpoint: https://civitai.com/models/143386/incredible-world
* lora: https://civitai.com/models/82098/add-more-details-detail-enhancer-tweaker-lora
* controlnet Tile: https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth
* controlnet QR code: https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster/blob/main/control_v1p_sd15_qrcode_monster.ckpt
### Requires the following models in the ComfyUI/models/ipadapter folder:
* IPAdapter+Face SD1.5- https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus-face_sd15.bin
* IPAdapter+ SD1.5 - https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter-plus_sd15.bin
* IPAdapter SDXL - https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl.bin
* IPAdapter SDXL ViT-H - https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/ip-adapter_sdxl_vit-h.bin
### Requires the following models in the ComfyUI/models/clip_vision folder:
* 1_5_clip_vision.safetensors - https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors
### AI Headshots
* epic_realism_inpaint.safetensors - https://civitai.com/models/90018/epicrealism-pureevolution-inpainting
### Outpainting workflow compatible with multiple images
* best model : dreamshaper_8inpainting.safetensors - https://civitai.com/models/4384?modelVersionId=131004
* inpainting controlnet model : https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/blob/main/control_v11p_sd15_inpaint_fp16.safetensors
Recommended parameters:
* steps: 50
* cfg: 10-14
Workflow using the Vyro 2-input outpaint mode: `outpaint_standalone.json`
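A rough sketch of queueing the standalone outpaint workflow through ComfyUI's HTTP API with these settings, assuming the JSON is in API format and uses standard `KSampler` nodes (node layout and field names are assumptions; inspect `outpaint_standalone.json` for the real ones):
```python
import json
import urllib.request

with open("outpaint_standalone.json") as f:
    wf = json.load(f)

# Apply the recommended sampler settings to every standard KSampler node.
for node in wf.values():
    if isinstance(node, dict) and node.get("class_type") == "KSampler":
        node["inputs"]["steps"] = 50
        node["inputs"]["cfg"] = 12  # anywhere in the 10-14 range

# Queue the prompt on a locally running ComfyUI instance.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": wf}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```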
Example output:
<img width="842" alt="image" src="https://i.imgur.com/0jW5EJk.png">
### Object Remover Inpaint
Workflow in `workflows/object_remover_1.json`
Recommended parameters:
* cfg: 6-8
* steps: 30-40
* denoise: 0.4-0.8
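These values can be patched into the workflow JSON the same way as in the outpaint sketch above, again assuming API-format JSON with standard `KSampler` nodes:
```python
import json

with open("workflows/object_remover_1.json") as f:
    wf = json.load(f)

for node in wf.values():
    if isinstance(node, dict) and node.get("class_type") == "KSampler":
        node["inputs"]["cfg"] = 7        # 6-8 recommended
        node["inputs"]["steps"] = 35     # 30-40 recommended
        node["inputs"]["denoise"] = 0.6  # 0.4-0.8 recommended
```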
Requirements:
- This workflow uses LaMa; download the LaMa model from `https://huggingface.co/hhhzzz/big-lama/resolve/main/big-lama.ckpt` and put it at `vyro-workflows\nodes\inpaint_utils\models\big-lama.ckpt`.
- Based on another repo, so you may also need these requirements: `https://github.com/hhhzzyang/Comfyui_Lama/blob/main/requirements.txt`
- This checkpoint seems to give the best results for realistic outputs: `https://civitai.com/models/25694?modelVersionId=134361`
Example output:
<img width="842" alt="image" src="https://i.imgur.com/WdouZE4.png">
<img width="842" alt="image" src="https://i.imgur.com/SvFTzeu.png">