Okay, let's integrate gr.Examples into your script with some diverse, high-quality Stable Diffusion-style prompts and enable caching.

Caching examples means that when the Space first builds (or rebuilds after changes), it will run the query function once for each example and store the resulting image. When a user clicks that example later, the cached image is shown instantly instead of running the model again.
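Under the hood this is just memoization keyed on the example's input values. A minimal, self-contained sketch of the idea (fake_query and its inputs are made-up stand-ins, not Gradio's actual cache implementation):

```python
from functools import lru_cache

# Hypothetical stand-in for the expensive model call (NOT the real query function)
call_count = 0

@lru_cache(maxsize=None)
def fake_query(prompt: str, steps: int) -> str:
    global call_count
    call_count += 1  # track how many times the "model" actually runs
    return f"image-for:{prompt}:{steps}"

fake_query("a misty forest", 40)           # computed and stored
result = fake_query("a misty forest", 40)  # served from the cache
print(call_count)  # 1 -- the expensive call ran only once
```

Gradio does the equivalent at build time, persisting each example's output to disk so clicks never re-invoke the model.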

Here's the modified app.py:

import gradio as gr
import requests
import io
import random
import os
import time
from PIL import Image
from deep_translator import GoogleTranslator
import json
from typing import Callable, List, Any, Literal # Added for type hinting Examples

# Project by Nymbo

API_URL = "https://api-inference.huggingface.co/models/black-forest-labs/FLUX.1-dev"
API_TOKEN = os.getenv("HF_READ_TOKEN") # Your main token for primary use
# Consider adding more read tokens if you have them for resilience
ADDITIONAL_TOKENS = [t for t in os.getenv("HF_EXTRA_TOKENS", "").split(',') if t] # Example: HF_EXTRA_TOKENS="token1,token2"
ALL_TOKENS = [t for t in [API_TOKEN] + ADDITIONAL_TOKENS if t] # Drop unset/empty entries so random.choice never picks None
if not API_TOKEN:
    print("Warning: HF_READ_TOKEN is not set. API calls may fail.")
    # Optional: raise an error or use a dummy token if needed
    # raise ValueError("HF_READ_TOKEN environment variable is required.")

timeout = 100

# Function to query the API and return the generated image
# Added type hints for clarity
def query(
    prompt: str,
    negative_prompt: str = "(deformed, distorted, disfigured), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, misspellings, typos",
    steps: int = 35,
    cfg_scale: float = 7.0,
    sampler: str = "DPM++ 2M Karras", # Matched default in UI
    seed: int = -1,
    strength: float = 0.7,
    width: int = 1024,
    height: int = 1024
) -> Image.Image | None:

    if not prompt: # Simplified check
        gr.Warning("Prompt cannot be empty.")
        return None
    if not ALL_TOKENS:
        gr.Error("No Hugging Face API tokens available.")
        return None

    key = random.randint(0, 9999)
    start_time = time.time()

    # Rotate through available tokens
    selected_api_token = random.choice(ALL_TOKENS)
    headers = {"Authorization": f"Bearer {selected_api_token}"}

    translated_prompt = prompt
    try:
        # Simple check if likely Russian (Cyrillic) - adjust if needed;
        # non-Cyrillic prompts are assumed to be English and left untouched
        if any('\u0400' <= char <= '\u04FF' for char in prompt):
            translated_prompt = GoogleTranslator(source='ru', target='en').translate(prompt)
            print(f'\033[1mGeneration {key} translation:\033[0m {translated_prompt}')
    except Exception as e:
        print(f"Translation failed: {e}. Using original prompt.")
        # Decide if you want to raise an error or just proceed
        # gr.Warning(f"Translation failed: {e}. Using original prompt.")

    # Add some extra flair to the prompt
    enhanced_prompt = f"{translated_prompt} | ultra detail, ultra elaboration, ultra quality, perfect."
    print(f'\033[1mGeneration {key} starting:\033[0m {enhanced_prompt}')

    # Prepare the payload for the API call, including width and height.
    # Note: The underlying API might not support all parameters directly.
    # Check the specific model card for supported parameters; we send
    # common ones and the model API will use what it understands.
    payload = {
        "inputs": enhanced_prompt,
        "parameters": {
            "num_inference_steps": steps,
            "guidance_scale": cfg_scale,
            "strength": strength, # Often used for img2img, may not apply here
            "width": width,
            "height": height,
            # Sampler might not be directly controllable via this endpoint;
            # the model often uses its default or recommended scheduler.
            # Sending it anyway in case future API versions support it.
            "scheduler": sampler,
        }
    }
    # The inference API expects negative_prompt inside "parameters";
    # omit the key entirely when it's empty, as some APIs prefer that.
    if negative_prompt:
        payload["parameters"]["negative_prompt"] = negative_prompt
    # Only send a seed the user fixed; omitting it lets the API pick randomly.
    if seed != -1:
        payload["parameters"]["seed"] = seed


    print(f"Payload for {key}: {json.dumps(payload, indent=2)}")

    # Send the request to the API and handle the response
    try:
        response = requests.post(API_URL, headers=headers, json=payload, timeout=timeout)
        response.raise_for_status() # Raises HTTPError for bad responses (4xx or 5xx)

        # Convert the response content into an image
        image_bytes = response.content
        image = Image.open(io.BytesIO(image_bytes))
        end_time = time.time()
        print(f'\033[1mGeneration {key} completed in {end_time - start_time:.2f}s!\033[0m')
        return image

    except requests.exceptions.Timeout:
        print(f"Error: Request timed out after {timeout} seconds.")
        raise gr.Error(f"Request timed out ({timeout}s). Model might be busy. Try again later.")
    except requests.exceptions.RequestException as e:
        status_code = e.response.status_code if e.response is not None else "N/A"
        error_content = e.response.text if e.response is not None else str(e)
        print(f"Error: API request failed. Status: {status_code}, Content: {error_content}")
        if status_code == 503:
            # Check for specific error messages if available
            if "is currently loading" in error_content:
                 raise gr.Error("Model is loading. Please wait a moment and try again.")
            else:
                 raise gr.Error("Model service unavailable (503). It might be overloaded or down. Try again later.")
        elif status_code == 400:
             raise gr.Error(f"Bad Request (400). Check parameters. API response: {error_content[:200]}")
        elif status_code == 429:
             raise gr.Error("Too many requests (429). Rate limit hit. Please wait.")
        else:
            raise gr.Error(f"API Error ({status_code}). Response: {error_content[:200]}")
    except (OSError, json.JSONDecodeError) as e: # IOError is an alias of OSError
        # Catch potential issues with image decoding or other file errors
        print(f"Error processing response or image: {e}")
        raise gr.Error(f"Failed to process the image response: {e}")
    except Exception as e: # Catch any other unexpected errors
        print(f"An unexpected error occurred: {e}")
        import traceback
        traceback.print_exc() # Print full traceback to console logs
        raise gr.Error(f"An unexpected error occurred: {e}")


# --- Gradio UI ---

# CSS to style the app
css = """
#app-container {
    max-width: 960px; /* Slightly wider */
    margin-left: auto;
    margin-right: auto;
}
/* Add more styling if desired */
"""

# Default negative prompt can be reused
default_negative_prompt = "(deformed, distorted, disfigured), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, misspellings, typos"

# Define the examples
# Each inner list corresponds to the inputs of the 'query' function IN ORDER:
# [prompt, negative_prompt, steps, cfg_scale, sampler, seed, strength, width, height]
example_list = [
    [
        "Epic cinematic shot of a medieval knight kneeling in a misty forest, volumetric lighting, hyperrealistic photo, 8k",
        default_negative_prompt, 40, 7.5, "DPM++ 2M Karras", 12345, 0.7, 1024, 1024
    ],
    [
        "Studio Ghibli style illustration of a cozy bakery storefront on a rainy day, warm lighting, detailed",
        default_negative_prompt, 30, 6.0, "Euler a", 54321, 0.7, 1024, 1024
    ],
    [
        "Macro photograph of a dewdrop on a spider web, intricate details, shallow depth of field, natural lighting",
        "blurry, unfocused, cartoon", 50, 8.0, "DPM++ 2M Karras", -1, 0.7, 1024, 1024 # Random seed
    ],
    [
        "Steampunk astronaut exploring an alien jungle landscape, brass and copper details, vibrant bioluminescent plants, wide angle",
        default_negative_prompt, 35, 7.0, "DPM++ SDE Karras", 98765, 0.7, 1216, 832 # Different aspect ratio
    ],
    [
        "Abstract geometric art, vibrant contrasting colors, sharp edges, minimalistic design, 4k wallpaper",
        "photorealistic, noisy, cluttered", 25, 5.0, "Euler", -1, 0.7, 1024, 1024
    ],
     [
        "Кот в очках читает книгу у камина", # Example in Russian for translation testing
        default_negative_prompt, 35, 7.0, "DPM++ 2M Karras", 11223, 0.7, 1024, 1024
    ]
]


# Build the Gradio UI with Blocks
# Use the custom theme if it's available, otherwise fallback to default
try:
    theme = gr.Theme.from_hub('Nymbo/Nymbo_Theme') # from_hub loads themes from the Hub; Base.load reads a local file
except Exception:
    print("Could not load Nymbo/Nymbo_Theme, using default theme.")
    theme = gr.themes.Default()

with gr.Blocks(theme=theme, css=css) as app:
    # Add a title to the app
    gr.HTML("<center><h1>FLUX.1-Dev Image Generator</h1></center>")

    # Container for all the UI elements
    with gr.Column(elem_id="app-container"):
        # Add a text input for the main prompt
        with gr.Row():
            with gr.Column(scale=3): # Give prompt more width
                text_prompt = gr.Textbox(label="Prompt", placeholder="Enter a prompt here (English or Russian)", lines=3, elem_id="prompt-text-input") # Increased lines

            with gr.Column(scale=1): # Negative prompt column
                 negative_prompt = gr.Textbox(label="Negative Prompt", placeholder="What to avoid...", value=default_negative_prompt, lines=3, elem_id="negative-prompt-text-input")

        # Accordion for advanced settings
        with gr.Accordion("Advanced Settings", open=False):
             with gr.Row():
                 width = gr.Slider(label="Width", value=1024, minimum=256, maximum=1216, step=64) # Adjusted steps/min
                 height = gr.Slider(label="Height", value=1024, minimum=256, maximum=1216, step=64) # Adjusted steps/min
             with gr.Row():
                 steps = gr.Slider(label="Sampling steps", value=35, minimum=10, maximum=100, step=1) # Adjusted min
                 cfg = gr.Slider(label="CFG Scale", value=7.0, minimum=1.0, maximum=20.0, step=0.5) # Added step
                 strength = gr.Slider(label="Strength (Img2Img)", value=0.7, minimum=0.0, maximum=1.0, step=0.01, info="Primarily for Image-to-Image tasks, may have limited effect here.") # Added step and info
             with gr.Row():
                 seed = gr.Slider(label="Seed (-1 for random)", value=-1, minimum=-1, maximum=2**32 - 1, step=1) # Increased max seed
                 method = gr.Radio(
                     label="Sampling method (Scheduler)",
                     value="DPM++ 2M Karras",
                     choices=["DPM++ 2M Karras", "DPM++ SDE Karras", "Euler", "Euler a", "Heun", "DDIM"],
                     info="Note: Model API might use its default scheduler regardless of selection."
                 )

        # Add a button to trigger the image generation
        with gr.Row():
            text_button = gr.Button("Generate Image", variant='primary', elem_id="gen-button")

        # Image output area to display the generated image
        with gr.Row():
            image_output = gr.Image(type="pil", label="Generated Image", elem_id="gallery") # Changed label


        # --- Add Examples Component Here ---
        with gr.Row():
             gr.Examples(
                 examples=example_list,
                 # List ALL components that correspond to the example list order
                 inputs=[text_prompt, negative_prompt, steps, cfg, method, seed, strength, width, height],
                 outputs=image_output, # The component to display the output
                 fn=query, # The function to run TO GENERATE the examples for caching
                 cache_examples=True, # << ENABLE CACHING HERE
                 label="Examples (Click to Run & View Cached Result)", # Customize label
                 examples_per_page=6 # Adjust how many show per page
                 # run_on_click=True # Optional: Set True if you want clicking to trigger 'query' again, even if cached. Usually False when caching.
             )

        # Bind the main button to the query function
        # Ensure the order matches the function definition (excluding self if it were a class method)
        text_button.click(
            query,
            inputs=[text_prompt, negative_prompt, steps, cfg, method, seed, strength, width, height],
            outputs=image_output,
            api_name="generate_image" # Optional: Set API name for potential external calls
        )


# Launch the Gradio app
# share=False is correct for Hugging Face Spaces deployment
# show_api=False is fine unless you specifically need to expose the API endpoint documentation
# debug=True can be useful for development but remove for production/sharing
app.launch(show_api=False, share=False)


Key Changes:

Import typing: Added Callable, List, Any, Literal for better type hinting, though not strictly required.

Error Handling in query: Made error handling slightly more robust, checking for token availability, using response.raise_for_status(), catching specific request exceptions (like Timeout, 503), and providing clearer Gradio errors (gr.Error, gr.Warning). Also added basic translation error handling.

API Payload: Adjusted the payload structure slightly based on common inference API patterns (e.g., num_inference_steps, guidance_scale). Added notes that the specific model might ignore some parameters. Handles -1 seed better.
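The -1 seed convention can be sketched as follows (build_parameters is an illustrative helper, not a function in the app; the net effect matches the script: a user-fixed seed is sent, -1 means the key is omitted so the API randomizes):

```python
def build_parameters(steps: int, cfg_scale: float, seed: int) -> dict:
    """Sketch of the seed handling: -1 means 'let the API randomize'."""
    params = {"num_inference_steps": steps, "guidance_scale": cfg_scale}
    if seed != -1:
        params["seed"] = seed  # only pin the seed when the user fixed one
    return params

print(build_parameters(35, 7.0, 12345))  # includes 'seed': 12345
print(build_parameters(35, 7.0, -1))     # 'seed' omitted entirely
```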

Default Negative Prompt: Stored the default negative prompt in a variable for reuse in examples.

example_list: Defined a list of lists. Each inner list contains values for all the inputs to the query function, in the correct order. Includes diverse prompts and some parameter variations. Added a Russian example.

gr.Examples Instantiation:

Placed gr.Examples(...) within the gr.Blocks context, after the main input/output components.

examples=example_list: Passed the defined list.

inputs=[...]: Listed all the input components (gr.Textbox, gr.Slider, etc.) in the exact order corresponding to the data in example_list.

outputs=image_output: Specified the output component.

fn=query: Crucially, provided the query function. This tells Gradio how to generate the results for caching.

cache_examples=True: This enables the caching mechanism.

Added label and examples_per_page for better UI.

run_on_click is typically False or omitted when cache_examples=True, as the point is to show the pre-computed result. Set it to True only if you want clicking an example to re-run the generation even if it's cached (useful if you want users to easily try variations from an example starting point).

UI Tweaks: Increased prompt textbox lines, adjusted slider steps/ranges, added info text to some sliders/radios.

Theme Loading: Added a try...except block for loading the custom theme to fall back gracefully if it's not found.

API Token Handling: Added basic handling for multiple tokens via an environment variable HF_EXTRA_TOKENS (comma-separated) and rotation.

Before Running:

Update requirements.txt: Ensure gradio (version >= 4.x recommended for the latest Examples/caching fixes), requests, pillow, and deep-translator are listed. langdetect is no longer needed, since the script now detects Cyrillic characters directly.

requests
pillow
deep-translator
gradio>=4.44.1 # Use the version suggested or newer

Set Environment Variables: Make sure HF_READ_TOKEN is set in your Space secrets. Optionally set HF_EXTRA_TOKENS if you have more tokens.
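HF_EXTRA_TOKENS is parsed as a comma-separated list and merged with HF_READ_TOKEN. A quick sketch of that parsing, matching the logic at the top of app.py (the values here are placeholders, not real tokens):

```python
import os
import random

# Placeholder secrets for illustration only -- never hardcode real tokens
os.environ["HF_READ_TOKEN"] = "hf_main"
os.environ["HF_EXTRA_TOKENS"] = "hf_extra1,hf_extra2"

api_token = os.getenv("HF_READ_TOKEN")
extra_tokens = [t for t in os.getenv("HF_EXTRA_TOKENS", "").split(",") if t]
all_tokens = [t for t in [api_token] + extra_tokens if t]

print(all_tokens)  # ['hf_main', 'hf_extra1', 'hf_extra2']
# Each request then picks one token at random, spreading load across rate limits
print(random.choice(all_tokens))
```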

Commit and Push: Save app.py and requirements.txt, commit, and push to your Space.

The first time the Space builds after these changes, it will take longer as it runs query for each example to build the cache. Subsequent loads will be faster, and clicking examples will show results instantly.