vcollos committed
Commit 2a8eb7b · 1 Parent(s): 7500263

uploading a new version

Files changed (3)
  1. README.md +69 -0
  2. app.py +537 -0
  3. requirements.txt +6 -0
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ title: FLUX LoRA DLC
+ emoji: 🥳
+ colorFrom: blue
+ colorTo: gray
+ sdk: gradio
+ sdk_version: 4.44.1
+ app_file: app.py
+ pinned: true
+ license: creativeml-openrail-m
+ short_description: '[ 200+ Impressive LoRA For Flux ]'
+ ---
+
+
+ # List of Flux Dev LoRA Repositories Used as of Now
+
+ | No. | Repository Name | Link |
+ | --- | --------------- | ---- |
+ | 1 | Canopus-LoRA-Flux-FaceRealism | [Link](https://huggingface.co/prithivMLmods/Canopus-LoRA-Flux-FaceRealism) |
+ | 2 | softserve_anime | [Link](https://huggingface.co/alvdansen/softserve_anime) |
+ | 3 | Canopus-LoRA-Flux-Anime | [Link](https://huggingface.co/prithivMLmods/Canopus-LoRA-Flux-Anime) |
+ | 4 | FLUX.1-dev-LoRA-One-Click-Creative-Template | [Link](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-One-Click-Creative-Template) |
+ | 5 | Canopus-LoRA-Flux-UltraRealism-2.0 | [Link](https://huggingface.co/prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0) |
+ | 6 | Flux-Game-Assets-LoRA-v2 | [Link](https://huggingface.co/gokaygokay/Flux-Game-Assets-LoRA-v2) |
+ | 7 | softpasty-flux-dev | [Link](https://huggingface.co/alvdansen/softpasty-flux-dev) |
+ | 8 | FLUX.1-dev-LoRA-add-details | [Link](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-add-details) |
+ | 9 | frosting_lane_flux | [Link](https://huggingface.co/alvdansen/frosting_lane_flux) |
+ | 10 | flux-ghibsky-illustration | [Link](https://huggingface.co/aleksa-codes/flux-ghibsky-illustration) |
+ | 11 | FLUX.1-dev-LoRA-Dark-Fantasy | [Link](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-Dark-Fantasy) |
+ | 12 | Flux_1_Dev_LoRA_Paper-Cutout-Style | [Link](https://huggingface.co/Norod78/Flux_1_Dev_LoRA_Paper-Cutout-Style) |
+ | 13 | mooniverse | [Link](https://huggingface.co/alvdansen/mooniverse) |
+ | 14 | pola-photo-flux | [Link](https://huggingface.co/alvdansen/pola-photo-flux) |
+ | 15 | flux-tarot-v1 | [Link](https://huggingface.co/multimodalart/flux-tarot-v1) |
+ | 16 | Flux-Dev-Real-Anime-LoRA | [Link](https://huggingface.co/prithivMLmods/Flux-Dev-Real-Anime-LoRA) |
+ | 17 | Flux_Sticker_Lora | [Link](https://huggingface.co/diabolic6045/Flux_Sticker_Lora) |
+ | 18 | flux-RealismLora | [Link](https://huggingface.co/XLabs-AI/flux-RealismLora) |
+ | 19 | flux-koda | [Link](https://huggingface.co/alvdansen/flux-koda) |
+ | 20 | Cine-Aesthetic | [Link](https://huggingface.co/mgwr/Cine-Aesthetic) |
+ | 21 | flux_cute3D | [Link](https://huggingface.co/SebastianBodza/flux_cute3D) |
+ | 22 | flux_dreamscape | [Link](https://huggingface.co/bingbangboom/flux_dreamscape) |
+ | 23 | Canopus-Cute-Kawaii-Flux-LoRA | [Link](https://huggingface.co/prithivMLmods/Canopus-Cute-Kawaii-Flux-LoRA) |
+ | 24 | Flux-Pastel-Anime | [Link](https://huggingface.co/Raelina/Flux-Pastel-Anime) |
+ | 25 | FLUX.1-dev-LoRA-Vector-Journey | [Link](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-Vector-Journey) |
+ | 26 | flux-miniature-worlds | [Link](https://huggingface.co/bingbangboom/flux-miniature-worlds) |
+ | 27 | bingbangboom_flux_surf | [Link](https://huggingface.co/glif-loradex-trainer/bingbangboom_flux_surf) |
+ | 28 | Canopus-Snoopy-Charlie-Brown-Flux-LoRA | [Link](https://huggingface.co/prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA) |
+ | 29 | sonny-anime-fixed | [Link](https://huggingface.co/alvdansen/sonny-anime-fixed) |
+ | 30 | flux-multi-angle | [Link](https://huggingface.co/davisbro/flux-multi-angle) |
+
+ & More ...
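+
+ A minimal sketch of loading one of the LoRAs above directly with `diffusers` (assumptions: a CUDA GPU, access to the gated `black-forest-labs/FLUX.1-dev` weights, and `multimodalart/flux-tarot-v1` picked purely as an example; check each LoRA's model card for its trigger word):
+
+ ```python
+ import torch
+ from diffusers import FluxPipeline
+
+ # Load the FLUX.1-dev base model
+ pipe = FluxPipeline.from_pretrained(
+     "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
+ ).to("cuda")
+
+ # Attach one of the LoRAs listed above
+ pipe.load_lora_weights("multimodalart/flux-tarot-v1")
+
+ # Generate an image (include the LoRA's trigger word in the prompt if it has one)
+ image = pipe(
+     "a fortune teller reading tarot cards",
+     num_inference_steps=28,
+     guidance_scale=3.5,
+ ).images[0]
+ image.save("lora_sample.png")
+ ```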
+
+ # Space Inspired From
+
+ | No. | Feature/Component | Description |
+ | --- | ----------------- | ----------- |
+ | 1 | **Title** | Flux LoRA The Explorer |
+ | 2 | **Model** | LoRA Fine-tuning with the Flux Model |
+ | 3 | **Exploration Mode** | Explore fine-tuned models within the Flux framework |
+ | 4 | **Interactivity** | Allows users to experiment with different LoRA models |
+ | 5 | **UI** | Clean interface inspired by multimodal designs |
+ | 6 | **Main Functionality** | Generate images using custom-trained LoRA models in Flux |
+ | 7 | **Usage Options** | Various models for selection and generation within the app |
+ | 8 | **Integration** | Hugging Face integration for easy access to pre-trained models |
+ | 9 | **Examples** | Provides image samples for user inspiration |
+ | 10 | **Customization** | User can modify prompts and parameters to explore model creativity |
+ | 11 | **Space URL** | [flux-lora-the-explorer](https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer) |
+
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.py ADDED
@@ -0,0 +1,537 @@
+ import os
+ import json
+ import copy
+ import time
+ import random
+ import logging
+ import numpy as np
+ from typing import Any, Dict, List, Optional, Union
+
+ import torch
+ from PIL import Image
+ import gradio as gr
+
+
+ from diffusers import (
+     DiffusionPipeline,
+     AutoencoderTiny,
+     AutoencoderKL,
+     AutoPipelineForImage2Image,
+     FluxPipeline,
+     FlowMatchEulerDiscreteScheduler)
+
+ from huggingface_hub import (
+     hf_hub_download,
+     HfFileSystem,
+     ModelCard,
+     snapshot_download)
+
+ from diffusers.utils import load_image
+
+ import spaces
+
+ # --- If the workspace is local or Colab ---
+
+ # Authenticate with Hugging Face
+ # from huggingface_hub import login
+
+ # Log in to Hugging Face using the provided token
+ # hf_token = 'hf-token-authentication'
+ # login(hf_token)
+
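+ # Compute the flow-matching scheduler's shift parameter ("mu"): a linear
+ # interpolation between base_shift and max_shift based on the image token
+ # sequence length.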
+ def calculate_shift(
+     image_seq_len,
+     base_seq_len: int = 256,
+     max_seq_len: int = 4096,
+     base_shift: float = 0.5,
+     max_shift: float = 1.16,
+ ):
+     m = (max_shift - base_shift) / (max_seq_len - base_seq_len)
+     b = base_shift - m * base_seq_len
+     mu = image_seq_len * m + b
+     return mu
+
+ def retrieve_timesteps(
+     scheduler,
+     num_inference_steps: Optional[int] = None,
+     device: Optional[Union[str, torch.device]] = None,
+     timesteps: Optional[List[int]] = None,
+     sigmas: Optional[List[float]] = None,
+     **kwargs,
+ ):
+     if timesteps is not None and sigmas is not None:
+         raise ValueError("Only one of `timesteps` or `sigmas` can be passed. Please choose one to set custom values")
+     if timesteps is not None:
+         scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs)
+         timesteps = scheduler.timesteps
+         num_inference_steps = len(timesteps)
+     elif sigmas is not None:
+         scheduler.set_timesteps(sigmas=sigmas, device=device, **kwargs)
+         timesteps = scheduler.timesteps
+         num_inference_steps = len(timesteps)
+     else:
+         scheduler.set_timesteps(num_inference_steps, device=device, **kwargs)
+         timesteps = scheduler.timesteps
+     return timesteps, num_inference_steps
+
+ # FLUX pipeline
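+ # Custom FLUX pipeline call implemented as a generator: at every denoising step
+ # it decodes the current latents with the fast preview VAE (self.vae, TAEF1 here)
+ # and yields the intermediate image, then yields a final image decoded with the
+ # full-quality VAE passed in as `good_vae`.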
+ @torch.inference_mode()
+ def flux_pipe_call_that_returns_an_iterable_of_images(
+     self,
+     prompt: Union[str, List[str]] = None,
+     prompt_2: Optional[Union[str, List[str]]] = None,
+     height: Optional[int] = None,
+     width: Optional[int] = None,
+     num_inference_steps: int = 28,
+     timesteps: List[int] = None,
+     guidance_scale: float = 3.5,
+     num_images_per_prompt: Optional[int] = 1,
+     generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
+     latents: Optional[torch.FloatTensor] = None,
+     prompt_embeds: Optional[torch.FloatTensor] = None,
+     pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+     output_type: Optional[str] = "pil",
+     return_dict: bool = True,
+     joint_attention_kwargs: Optional[Dict[str, Any]] = None,
+     max_sequence_length: int = 512,
+     good_vae: Optional[Any] = None,
+ ):
+     height = height or self.default_sample_size * self.vae_scale_factor
+     width = width or self.default_sample_size * self.vae_scale_factor
+
+     self.check_inputs(
+         prompt,
+         prompt_2,
+         height,
+         width,
+         prompt_embeds=prompt_embeds,
+         pooled_prompt_embeds=pooled_prompt_embeds,
+         max_sequence_length=max_sequence_length,
+     )
+
+     self._guidance_scale = guidance_scale
+     self._joint_attention_kwargs = joint_attention_kwargs
+     self._interrupt = False
+
+     batch_size = 1 if isinstance(prompt, str) else len(prompt)
+     device = self._execution_device
+
+     lora_scale = joint_attention_kwargs.get("scale", None) if joint_attention_kwargs is not None else None
+     prompt_embeds, pooled_prompt_embeds, text_ids = self.encode_prompt(
+         prompt=prompt,
+         prompt_2=prompt_2,
+         prompt_embeds=prompt_embeds,
+         pooled_prompt_embeds=pooled_prompt_embeds,
+         device=device,
+         num_images_per_prompt=num_images_per_prompt,
+         max_sequence_length=max_sequence_length,
+         lora_scale=lora_scale,
+     )
+
+     num_channels_latents = self.transformer.config.in_channels // 4
+     latents, latent_image_ids = self.prepare_latents(
+         batch_size * num_images_per_prompt,
+         num_channels_latents,
+         height,
+         width,
+         prompt_embeds.dtype,
+         device,
+         generator,
+         latents,
+     )
+
+     sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
+     image_seq_len = latents.shape[1]
+     mu = calculate_shift(
+         image_seq_len,
+         self.scheduler.config.base_image_seq_len,
+         self.scheduler.config.max_image_seq_len,
+         self.scheduler.config.base_shift,
+         self.scheduler.config.max_shift,
+     )
+     timesteps, num_inference_steps = retrieve_timesteps(
+         self.scheduler,
+         num_inference_steps,
+         device,
+         timesteps,
+         sigmas,
+         mu=mu,
+     )
+     self._num_timesteps = len(timesteps)
+
+     guidance = torch.full([1], guidance_scale, device=device, dtype=torch.float32).expand(latents.shape[0]) if self.transformer.config.guidance_embeds else None
+
+     for i, t in enumerate(timesteps):
+         if self.interrupt:
+             continue
+
+         timestep = t.expand(latents.shape[0]).to(latents.dtype)
+
+         noise_pred = self.transformer(
+             hidden_states=latents,
+             timestep=timestep / 1000,
+             guidance=guidance,
+             pooled_projections=pooled_prompt_embeds,
+             encoder_hidden_states=prompt_embeds,
+             txt_ids=text_ids,
+             img_ids=latent_image_ids,
+             joint_attention_kwargs=self.joint_attention_kwargs,
+             return_dict=False,
+         )[0]
+
+         latents_for_image = self._unpack_latents(latents, height, width, self.vae_scale_factor)
+         latents_for_image = (latents_for_image / self.vae.config.scaling_factor) + self.vae.config.shift_factor
+         image = self.vae.decode(latents_for_image, return_dict=False)[0]
+         yield self.image_processor.postprocess(image, output_type=output_type)[0]
+         latents = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]
+         torch.cuda.empty_cache()
+
+     latents = self._unpack_latents(latents, height, width, self.vae_scale_factor)
+     latents = (latents / good_vae.config.scaling_factor) + good_vae.config.shift_factor
+     image = good_vae.decode(latents, return_dict=False)[0]
+     self.maybe_free_model_hooks()
+     torch.cuda.empty_cache()
+     yield self.image_processor.postprocess(image, output_type=output_type)[0]
+
+ #------------------------------------------------------------------------------------------------------------------------------------------------------------#
+ loras = [
+     # Super-Realism
+     {
+         "image": "https://huggingface.co/Collos/uniodonto/resolve/main/images/jalves.jpeg",
+         "title": "José Alves",
+         "repo": "Collos/uniodonto",
+         "weights": "lora.safetensors",
+         "trigger_word": "José Alves"
+     }
+
+     #add new
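+     # Example (hypothetical values) — to register another LoRA, add a comma after the
+     # entry above and append a dict like:
+     # {
+     #     "image": "https://huggingface.co/<user>/<repo>/resolve/main/preview.jpeg",
+     #     "title": "My LoRA",
+     #     "repo": "<user>/<repo>",
+     #     "weights": "lora.safetensors",   # optional; auto-detected when omitted
+     #     "trigger_word": "my trigger word"
+     # },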
+ ]
+
+ #--------------------------------------------------Model Initialization-----------------------------------------------------------------------------------------#
+
+ dtype = torch.bfloat16
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ base_model = "black-forest-labs/FLUX.1-dev"
+
+ # TAEF1 is a very tiny autoencoder that uses the same "latent API" as FLUX.1's VAE; it is useful for real-time previewing of the FLUX.1 generation process.
+ taef1 = AutoencoderTiny.from_pretrained("madebyollin/taef1", torch_dtype=dtype).to(device)
+ good_vae = AutoencoderKL.from_pretrained(base_model, subfolder="vae", torch_dtype=dtype).to(device)
+ pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=dtype, vae=taef1).to(device)
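+ # The image-to-image pipeline below reuses the transformer, text encoders and
+ # tokenizers from `pipe`, so the large FLUX modules are loaded only once; it
+ # decodes with the full-quality VAE (`good_vae`) rather than the tiny preview VAE.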
+ pipe_i2i = AutoPipelineForImage2Image.from_pretrained(base_model,
+                                                       vae=good_vae,
+                                                       transformer=pipe.transformer,
+                                                       text_encoder=pipe.text_encoder,
+                                                       tokenizer=pipe.tokenizer,
+                                                       text_encoder_2=pipe.text_encoder_2,
+                                                       tokenizer_2=pipe.tokenizer_2,
+                                                       torch_dtype=dtype
+                                                       )
+
+ MAX_SEED = 2**32-1
+
+ pipe.flux_pipe_call_that_returns_an_iterable_of_images = flux_pipe_call_that_returns_an_iterable_of_images.__get__(pipe)
+
+ class calculateDuration:
+     def __init__(self, activity_name=""):
+         self.activity_name = activity_name
+
+     def __enter__(self):
+         self.start_time = time.time()
+         return self
+
+     def __exit__(self, exc_type, exc_value, traceback):
+         self.end_time = time.time()
+         self.elapsed_time = self.end_time - self.start_time
+         if self.activity_name:
+             print(f"Elapsed time for {self.activity_name}: {self.elapsed_time:.6f} seconds")
+         else:
+             print(f"Elapsed time: {self.elapsed_time:.6f} seconds")
+
+ def update_selection(evt: gr.SelectData, width, height):
+     selected_lora = loras[evt.index]
+     new_placeholder = f"Type a prompt for {selected_lora['title']}"
+     lora_repo = selected_lora["repo"]
+     updated_text = f"### Selected: [{lora_repo}](https://huggingface.co/{lora_repo}) ✅"
+     if "aspect" in selected_lora:
+         if selected_lora["aspect"] == "portrait":
+             width = 768
+             height = 1024
+         elif selected_lora["aspect"] == "landscape":
+             width = 1024
+             height = 768
+         else:
+             width = 1024
+             height = 1024
+     return (
+         gr.update(placeholder=new_placeholder),
+         updated_text,
+         evt.index,
+         width,
+         height,
+     )
+
+ @spaces.GPU(duration=100)
+ def generate_image(prompt_mash, steps, seed, cfg_scale, width, height, lora_scale, progress):
+     pipe.to("cuda")
+     generator = torch.Generator(device="cuda").manual_seed(seed)
+     with calculateDuration("Generating image"):
+         # Generate image
+         for img in pipe.flux_pipe_call_that_returns_an_iterable_of_images(
+             prompt=prompt_mash,
+             num_inference_steps=steps,
+             guidance_scale=cfg_scale,
+             width=width,
+             height=height,
+             generator=generator,
+             joint_attention_kwargs={"scale": lora_scale},
+             output_type="pil",
+             good_vae=good_vae,
+         ):
+             yield img
+
+ def generate_image_to_image(prompt_mash, image_input_path, image_strength, steps, cfg_scale, width, height, lora_scale, seed):
+     generator = torch.Generator(device="cuda").manual_seed(seed)
+     pipe_i2i.to("cuda")
+     image_input = load_image(image_input_path)
+     final_image = pipe_i2i(
+         prompt=prompt_mash,
+         image=image_input,
+         strength=image_strength,
+         num_inference_steps=steps,
+         guidance_scale=cfg_scale,
+         width=width,
+         height=height,
+         generator=generator,
+         joint_attention_kwargs={"scale": lora_scale},
+         output_type="pil",
+     ).images[0]
+     return final_image
+
+ @spaces.GPU(duration=100)
+ def run_lora(prompt, image_input, image_strength, cfg_scale, steps, selected_index, randomize_seed, seed, width, height, lora_scale, progress=gr.Progress(track_tqdm=True)):
+     if selected_index is None:
+         raise gr.Error("Selecione um modelo para continuar.")
+     selected_lora = loras[selected_index]
+     lora_path = selected_lora["repo"]
+     trigger_word = selected_lora["trigger_word"]
+     if trigger_word:
+         if "trigger_position" in selected_lora:
+             if selected_lora["trigger_position"] == "prepend":
+                 prompt_mash = f"{trigger_word} {prompt}"
+             else:
+                 prompt_mash = f"{prompt} {trigger_word}"
+         else:
+             prompt_mash = f"{trigger_word} {prompt}"
+     else:
+         prompt_mash = prompt
+
+     with calculateDuration("Carregando Modelo"):
+         pipe.unload_lora_weights()
+         pipe_i2i.unload_lora_weights()
+
+     #LoRA weights flow
+     with calculateDuration(f"Carregando modelo para {selected_lora['title']}"):
+         pipe_to_use = pipe_i2i if image_input is not None else pipe
+         weight_name = selected_lora.get("weights", None)
+
+         pipe_to_use.load_lora_weights(
+             lora_path,
+             weight_name=weight_name,
+             low_cpu_mem_usage=True
+         )
+
+     with calculateDuration("Randomizing seed"):
+         if randomize_seed:
+             seed = random.randint(0, MAX_SEED)
+
+     if image_input is not None:
+
+         final_image = generate_image_to_image(prompt_mash, image_input, image_strength, steps, cfg_scale, width, height, lora_scale, seed)
+         yield final_image, seed, gr.update(visible=False)
+     else:
+         image_generator = generate_image(prompt_mash, steps, seed, cfg_scale, width, height, lora_scale, progress)
+
+         final_image = None
+         step_counter = 0
+         for image in image_generator:
+             step_counter += 1
+             final_image = image
+             progress_bar = f'<div class="progress-container"><div class="progress-bar" style="--current: {step_counter}; --total: {steps};"></div></div>'
+             yield image, seed, gr.update(value=progress_bar, visible=True)
+
+         yield final_image, seed, gr.update(value=progress_bar, visible=False)
+
+ def get_huggingface_safetensors(link):
+     split_link = link.split("/")
+     if len(split_link) == 2:
+         model_card = ModelCard.load(link)
+         base_model = model_card.data.get("base_model")
+         print(base_model)
+
+         # Allow both FLUX.1-dev and FLUX.1-schnell LoRAs
+         if (base_model != "black-forest-labs/FLUX.1-dev") and (base_model != "black-forest-labs/FLUX.1-schnell"):
+             raise Exception("Flux LoRA Not Found!")
+
+         # Only allow "black-forest-labs/FLUX.1-dev"
+         # if base_model != "black-forest-labs/FLUX.1-dev":
+         #     raise Exception("Only FLUX.1-dev is supported, other LoRA models are not allowed!")
+
+         image_path = model_card.data.get("widget", [{}])[0].get("output", {}).get("url", None)
+         trigger_word = model_card.data.get("instance_prompt", "")
+         image_url = f"https://huggingface.co/{link}/resolve/main/{image_path}" if image_path else None
+         fs = HfFileSystem()
+         try:
+             list_of_files = fs.ls(link, detail=False)
+             for file in list_of_files:
+                 if file.endswith(".safetensors"):
+                     safetensors_name = file.split("/")[-1]
+                 if not image_url and file.lower().endswith((".jpg", ".jpeg", ".png", ".webp")):
+                     image_elements = file.split("/")
+                     image_url = f"https://huggingface.co/{link}/resolve/main/{image_elements[-1]}"
+         except Exception as e:
+             print(e)
+             gr.Warning("You didn't provide a valid link or Hugging Face repository containing a *.safetensors LoRA")
+             raise Exception("You didn't provide a valid link or Hugging Face repository containing a *.safetensors LoRA")
+         return split_link[1], link, safetensors_name, trigger_word, image_url
+
+ def check_custom_model(link):
+     if link.startswith("https://"):
+         if link.startswith("https://huggingface.co") or link.startswith("https://www.huggingface.co"):
+             link_split = link.split("huggingface.co/")
+             return get_huggingface_safetensors(link_split[1])
+     else:
+         return get_huggingface_safetensors(link)
+
+ def add_custom_lora(custom_lora):
+     global loras
+     if custom_lora:
+         try:
+             title, repo, path, trigger_word, image = check_custom_model(custom_lora)
+             print(f"Loaded custom LoRA: {repo}")
+             card = f'''
+             <div class="custom_lora_card">
+             <span>Loaded custom LoRA:</span>
+             <div class="card_internal">
+             <img src="{image}" />
+             <div>
+             <h3>{title}</h3>
+             <small>{"Using: <code><b>"+trigger_word+"</b></code> as the trigger word" if trigger_word else "No trigger word found. If there's a trigger word, include it in your prompt"}<br></small>
+             </div>
+             </div>
+             </div>
+             '''
+             existing_item_index = next((index for (index, item) in enumerate(loras) if item['repo'] == repo), None)
+             if existing_item_index is None:
+                 new_item = {
+                     "image": image,
+                     "title": title,
+                     "repo": repo,
+                     "weights": path,
+                     "trigger_word": trigger_word
+                 }
+                 print(new_item)
+                 existing_item_index = len(loras)
+                 loras.append(new_item)
+
+             return gr.update(visible=True, value=card), gr.update(visible=True), gr.Gallery(selected_index=None), f"Custom: {path}", existing_item_index, trigger_word
+         except Exception as e:
+             gr.Warning("Invalid LoRA: either the link is invalid or it is not a FLUX LoRA")
+             return gr.update(visible=True, value="Invalid LoRA: either the link is invalid or it is not a FLUX LoRA"), gr.update(visible=False), gr.update(), "", None, ""
+     else:
+         return gr.update(visible=False), gr.update(visible=False), gr.update(), "", None, ""
+
+ def remove_custom_lora():
+     return gr.update(visible=False), gr.update(visible=False), gr.update(), "", None, ""
+
+ run_lora.zerogpu = True
+
+ css = '''
+ #gen_btn{height: 100%}
+ #gen_column{align-self: stretch}
+ #title{text-align: center}
+ #title h1{font-size: 3em; display:inline-flex; align-items:center}
+ #title img{width: 100px; margin-right: 0.5em}
+ #gallery .grid-wrap{height: 10vh}
+ #lora_list{background: var(--block-background-fill);padding: 0 1em .3em; font-size: 90%}
+ .card_internal{display: flex;height: 100px;margin-top: .5em}
+ .card_internal img{margin-right: 1em}
+ .styler{--form-gap-width: 0px !important}
+ #progress{height:30px}
+ #progress .generating{display:none}
+ .progress-container {width: 100%;height: 30px;background-color: #f0f0f0;border-radius: 15px;overflow: hidden;margin-bottom: 20px}
+ .progress-bar {height: 100%;background-color: #4f46e5;width: calc(var(--current) / var(--total) * 100%);transition: width 0.5s ease-in-out}
+ '''
+
+ with gr.Blocks(theme="prithivMLmods/Minecraft-Theme", css=css, delete_cache=(60, 60)) as app:
+     title = gr.HTML(
+         """<h1>FLUX LoRA DLC🥳</h1>""",
+         elem_id="title",
+     )
+     selected_index = gr.State(None)
+     with gr.Row():
+         with gr.Column(scale=3):
+             prompt = gr.Textbox(label="Prompt", lines=1, placeholder=":/ choose the LoRA and type the prompt ")
+         with gr.Column(scale=1, elem_id="gen_column"):
+             generate_button = gr.Button("Generate", variant="primary", elem_id="gen_btn")
+     with gr.Row():
+         with gr.Column():
+             selected_info = gr.Markdown("")
+             gallery = gr.Gallery(
+                 [(item["image"], item["title"]) for item in loras],
+                 label="LoRA DLC's",
+                 allow_preview=False,
+                 columns=3,
+                 elem_id="gallery",
+                 show_share_button=False
+             )
+             with gr.Group():
+                 custom_lora = gr.Textbox(label="Enter Custom LoRA", placeholder="prithivMLmods/Canopus-LoRA-Flux-Anime")
+                 gr.Markdown("[Check the list of FLUX LoRA's](https://huggingface.co/models?other=base_model:adapter:black-forest-labs/FLUX.1-dev)", elem_id="lora_list")
+             custom_lora_info = gr.HTML(visible=False)
+             custom_lora_button = gr.Button("Remove custom LoRA", visible=False)
+         with gr.Column():
+             progress_bar = gr.Markdown(elem_id="progress", visible=False)
+             result = gr.Image(label="Generated Image")
+
+     with gr.Row():
+         with gr.Accordion("Advanced Settings", open=False):
+             with gr.Row():
+                 input_image = gr.Image(label="Input image", type="filepath")
+                 image_strength = gr.Slider(label="Denoise Strength", info="Lower means more image influence", minimum=0.1, maximum=1.0, step=0.01, value=0.75)
+             with gr.Column():
+                 with gr.Row():
+                     cfg_scale = gr.Slider(label="CFG Scale", minimum=1, maximum=20, step=0.5, value=3.5)
+                     steps = gr.Slider(label="Steps", minimum=1, maximum=50, step=1, value=28)
+
+                 with gr.Row():
+                     width = gr.Slider(label="Width", minimum=256, maximum=1536, step=64, value=1024)
+                     height = gr.Slider(label="Height", minimum=256, maximum=1536, step=64, value=1024)
+
+                 with gr.Row():
+                     randomize_seed = gr.Checkbox(True, label="Randomize seed")
+                     seed = gr.Slider(label="Seed", minimum=0, maximum=MAX_SEED, step=1, value=0, randomize=True)
+                     lora_scale = gr.Slider(label="LoRA Scale", minimum=0, maximum=3, step=0.01, value=0.95)
+
+     gallery.select(
+         update_selection,
+         inputs=[width, height],
+         outputs=[prompt, selected_info, selected_index, width, height]
+     )
+     custom_lora.input(
+         add_custom_lora,
+         inputs=[custom_lora],
+         outputs=[custom_lora_info, custom_lora_button, gallery, selected_info, selected_index, prompt]
+     )
+     custom_lora_button.click(
+         remove_custom_lora,
+         outputs=[custom_lora_info, custom_lora_button, gallery, selected_info, selected_index, custom_lora]
+     )
+     gr.on(
+         triggers=[generate_button.click, prompt.submit],
+         fn=run_lora,
+         inputs=[prompt, input_image, image_strength, cfg_scale, steps, selected_index, randomize_seed, seed, width, height, lora_scale],
+         outputs=[result, seed, progress_bar]
+     )
+
+ app.queue()
+ app.launch()
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ git+https://github.com/huggingface/diffusers.git
+ git+https://github.com/huggingface/transformers.git
+ git+https://github.com/huggingface/accelerate.git
+ safetensors
+ sentencepiece
+ git+https://github.com/huggingface/peft.git