prithivMLmods committed
Commit c2ddc59
1 Parent(s): 39e1f39

You need to be a Pro member in Colab, otherwise you will face malloc() failures (insufficient memory).
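The notebook below loads the full FLUX.1-dev pipeline in bfloat16 onto a single T4, so the free tier's memory is easily exhausted. As a minimal sketch (not part of this commit, assuming the same diffusers/accelerate versions installed in the first cell), GPU memory pressure can be reduced by offloading pipeline components to the CPU instead of calling pipe.to("cuda"); if malloc() already fails while the checkpoint is being loaded into system RAM, only a higher-RAM (Pro) runtime really helps, which is what this commit message recommends.

import torch
from diffusers import DiffusionPipeline

# Same base model as the notebook; the repo is gated on Hugging Face,
# so logging in with a token is still required.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Keep submodules on the CPU and move each one to the GPU only while it runs
# (instead of pipe.to("cuda")); slower per image, but peak VRAM is much lower.
pipe.enable_model_cpu_offload()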

Files changed (1)
  1. Flux-RealPix.ipynb +446 -0
Flux-RealPix.ipynb ADDED
@@ -0,0 +1,446 @@
+ {
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "gpuType": "T4"
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ },
+ "accelerator": "GPU"
+ },
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "L1CPm6HAZuTg",
+ "outputId": "3a85cb03-b18f-4e5b-8c5f-2f577bc46598"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Requirement already satisfied: torch in /usr/local/lib/python3.10/dist-packages (2.4.0+cu121)\n",
+ "Collecting diffusers\n",
+ " Downloading diffusers-0.30.2-py3-none-any.whl.metadata (18 kB)\n",
+ "Collecting spaces\n",
+ " Downloading spaces-0.30.2-py3-none-any.whl.metadata (1.0 kB)\n",
+ "Requirement already satisfied: transformers in /usr/local/lib/python3.10/dist-packages (4.44.2)\n",
+ "Collecting peft\n",
+ " Downloading peft-0.12.0-py3-none-any.whl.metadata (13 kB)\n",
+ "Requirement already satisfied: sentencepiece in /usr/local/lib/python3.10/dist-packages (0.1.99)\n",
+ "Collecting gradio\n",
+ " Downloading gradio-4.43.0-py3-none-any.whl.metadata (15 kB)\n",
+ "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch) (3.15.4)\n",
+ "Requirement already satisfied: typing-extensions>=4.8.0 in /usr/local/lib/python3.10/dist-packages (from torch) (4.12.2)\n",
+ "Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch) (1.13.2)\n",
+ "Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch) (3.3)\n",
+ "Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch) (3.1.4)\n",
+ "Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch) (2024.6.1)\n",
+ "Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.10/dist-packages (from diffusers) (8.4.0)\n",
+ "Requirement already satisfied: huggingface-hub>=0.23.2 in /usr/local/lib/python3.10/dist-packages (from diffusers) (0.24.6)\n",
+ "Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from diffusers) (1.26.4)\n",
+ "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from diffusers) (2024.5.15)\n",
+ "Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from diffusers) (2.32.3)\n",
+ "Requirement already satisfied: safetensors>=0.3.1 in /usr/local/lib/python3.10/dist-packages (from diffusers) (0.4.4)\n",
+ "Requirement already satisfied: Pillow in /usr/local/lib/python3.10/dist-packages (from diffusers) (9.4.0)\n",
+ "Collecting httpx>=0.20 (from spaces)\n",
+ " Downloading httpx-0.27.2-py3-none-any.whl.metadata (7.1 kB)\n",
+ "Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from spaces) (24.1)\n",
+ "Requirement already satisfied: psutil<6,>=2 in /usr/local/lib/python3.10/dist-packages (from spaces) (5.9.5)\n",
+ "Requirement already satisfied: pydantic<3,>=1 in /usr/local/lib/python3.10/dist-packages (from spaces) (2.8.2)\n",
+ "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers) (6.0.2)\n",
+ "Requirement already satisfied: tokenizers<0.20,>=0.19 in /usr/local/lib/python3.10/dist-packages (from transformers) (0.19.1)\n",
+ "Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers) (4.66.5)\n",
+ "Requirement already satisfied: accelerate>=0.21.0 in /usr/local/lib/python3.10/dist-packages (from peft) (0.33.0)\n",
+ "Collecting aiofiles<24.0,>=22.0 (from gradio)\n",
+ " Downloading aiofiles-23.2.1-py3-none-any.whl.metadata (9.7 kB)\n",
+ "Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/lib/python3.10/dist-packages (from gradio) (3.7.1)\n",
+ "Collecting fastapi<0.113.0 (from gradio)\n",
+ " Downloading fastapi-0.112.4-py3-none-any.whl.metadata (27 kB)\n",
+ "Collecting ffmpy (from gradio)\n",
+ " Downloading ffmpy-0.4.0-py3-none-any.whl.metadata (2.9 kB)\n",
+ "Collecting gradio-client==1.3.0 (from gradio)\n",
+ " Downloading gradio_client-1.3.0-py3-none-any.whl.metadata (7.1 kB)\n",
+ "Requirement already satisfied: importlib-resources<7.0,>=1.3 in /usr/local/lib/python3.10/dist-packages (from gradio) (6.4.4)\n",
+ "Requirement already satisfied: markupsafe~=2.0 in /usr/local/lib/python3.10/dist-packages (from gradio) (2.1.5)\n",
+ "Requirement already satisfied: matplotlib~=3.0 in /usr/local/lib/python3.10/dist-packages (from gradio) (3.7.1)\n",
+ "Collecting orjson~=3.0 (from gradio)\n",
+ " Downloading orjson-3.10.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (50 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.4/50.4 kB\u001b[0m \u001b[31m1.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hRequirement already satisfied: pandas<3.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from gradio) (2.1.4)\n",
+ "Collecting pydub (from gradio)\n",
+ " Downloading pydub-0.25.1-py2.py3-none-any.whl.metadata (1.4 kB)\n",
+ "Collecting python-multipart>=0.0.9 (from gradio)\n",
+ " Downloading python_multipart-0.0.9-py3-none-any.whl.metadata (2.5 kB)\n",
+ "Collecting ruff>=0.2.2 (from gradio)\n",
+ " Downloading ruff-0.6.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (25 kB)\n",
+ "Collecting semantic-version~=2.0 (from gradio)\n",
+ " Downloading semantic_version-2.10.0-py2.py3-none-any.whl.metadata (9.7 kB)\n",
+ "Collecting tomlkit==0.12.0 (from gradio)\n",
+ " Downloading tomlkit-0.12.0-py3-none-any.whl.metadata (2.7 kB)\n",
+ "Requirement already satisfied: typer<1.0,>=0.12 in /usr/local/lib/python3.10/dist-packages (from gradio) (0.12.5)\n",
+ "Requirement already satisfied: urllib3~=2.0 in /usr/local/lib/python3.10/dist-packages (from gradio) (2.0.7)\n",
+ "Collecting uvicorn>=0.14.0 (from gradio)\n",
+ " Downloading uvicorn-0.30.6-py3-none-any.whl.metadata (6.6 kB)\n",
+ "Collecting websockets<13.0,>=10.0 (from gradio-client==1.3.0->gradio)\n",
+ " Downloading websockets-12.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.6 kB)\n",
+ "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.10/dist-packages (from anyio<5.0,>=3.0->gradio) (3.8)\n",
+ "Requirement already satisfied: sniffio>=1.1 in /usr/local/lib/python3.10/dist-packages (from anyio<5.0,>=3.0->gradio) (1.3.1)\n",
+ "Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from anyio<5.0,>=3.0->gradio) (1.2.2)\n",
+ "Collecting starlette<0.39.0,>=0.37.2 (from fastapi<0.113.0->gradio)\n",
+ " Downloading starlette-0.38.5-py3-none-any.whl.metadata (6.0 kB)\n",
+ "Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from httpx>=0.20->spaces) (2024.8.30)\n",
+ "Collecting httpcore==1.* (from httpx>=0.20->spaces)\n",
+ " Downloading httpcore-1.0.5-py3-none-any.whl.metadata (20 kB)\n",
+ "Collecting h11<0.15,>=0.13 (from httpcore==1.*->httpx>=0.20->spaces)\n",
+ " Downloading h11-0.14.0-py3-none-any.whl.metadata (8.2 kB)\n",
+ "Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib~=3.0->gradio) (1.3.0)\n",
+ "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib~=3.0->gradio) (0.12.1)\n",
+ "Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib~=3.0->gradio) (4.53.1)\n",
+ "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib~=3.0->gradio) (1.4.5)\n",
+ "Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib~=3.0->gradio) (3.1.4)\n",
+ "Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib~=3.0->gradio) (2.8.2)\n",
+ "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas<3.0,>=1.0->gradio) (2024.1)\n",
+ "Requirement already satisfied: tzdata>=2022.1 in /usr/local/lib/python3.10/dist-packages (from pandas<3.0,>=1.0->gradio) (2024.1)\n",
+ "Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from pydantic<3,>=1->spaces) (0.7.0)\n",
+ "Requirement already satisfied: pydantic-core==2.20.1 in /usr/local/lib/python3.10/dist-packages (from pydantic<3,>=1->spaces) (2.20.1)\n",
+ "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->diffusers) (3.3.2)\n",
+ "Requirement already satisfied: click>=8.0.0 in /usr/local/lib/python3.10/dist-packages (from typer<1.0,>=0.12->gradio) (8.1.7)\n",
+ "Requirement already satisfied: shellingham>=1.3.0 in /usr/local/lib/python3.10/dist-packages (from typer<1.0,>=0.12->gradio) (1.5.4)\n",
+ "Requirement already satisfied: rich>=10.11.0 in /usr/local/lib/python3.10/dist-packages (from typer<1.0,>=0.12->gradio) (13.8.0)\n",
+ "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.10/dist-packages (from importlib-metadata->diffusers) (3.20.1)\n",
+ "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from sympy->torch) (1.3.0)\n",
+ "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.7->matplotlib~=3.0->gradio) (1.16.0)\n",
+ "Requirement already satisfied: markdown-it-py>=2.2.0 in /usr/local/lib/python3.10/dist-packages (from rich>=10.11.0->typer<1.0,>=0.12->gradio) (3.0.0)\n",
+ "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /usr/local/lib/python3.10/dist-packages (from rich>=10.11.0->typer<1.0,>=0.12->gradio) (2.16.1)\n",
+ "Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.10/dist-packages (from markdown-it-py>=2.2.0->rich>=10.11.0->typer<1.0,>=0.12->gradio) (0.1.2)\n",
+ "Downloading diffusers-0.30.2-py3-none-any.whl (2.6 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.6/2.6 MB\u001b[0m \u001b[31m38.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading spaces-0.30.2-py3-none-any.whl (27 kB)\n",
+ "Downloading peft-0.12.0-py3-none-any.whl (296 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m296.4/296.4 kB\u001b[0m \u001b[31m21.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading gradio-4.43.0-py3-none-any.whl (18.1 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m18.1/18.1 MB\u001b[0m \u001b[31m67.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading gradio_client-1.3.0-py3-none-any.whl (318 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m318.7/318.7 kB\u001b[0m \u001b[31m20.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading tomlkit-0.12.0-py3-none-any.whl (37 kB)\n",
+ "Downloading aiofiles-23.2.1-py3-none-any.whl (15 kB)\n",
+ "Downloading fastapi-0.112.4-py3-none-any.whl (93 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m93.9/93.9 kB\u001b[0m \u001b[31m2.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading httpx-0.27.2-py3-none-any.whl (76 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.4/76.4 kB\u001b[0m \u001b[31m2.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading httpcore-1.0.5-py3-none-any.whl (77 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.9/77.9 kB\u001b[0m \u001b[31m6.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading orjson-3.10.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (141 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m141.9/141.9 kB\u001b[0m \u001b[31m12.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading python_multipart-0.0.9-py3-none-any.whl (22 kB)\n",
+ "Downloading ruff-0.6.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (10.3 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m10.3/10.3 MB\u001b[0m \u001b[31m82.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading semantic_version-2.10.0-py2.py3-none-any.whl (15 kB)\n",
+ "Downloading uvicorn-0.30.6-py3-none-any.whl (62 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.8/62.8 kB\u001b[0m \u001b[31m4.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading ffmpy-0.4.0-py3-none-any.whl (5.8 kB)\n",
+ "Downloading pydub-0.25.1-py2.py3-none-any.whl (32 kB)\n",
+ "Downloading h11-0.14.0-py3-none-any.whl (58 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m5.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading starlette-0.38.5-py3-none-any.whl (71 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m71.4/71.4 kB\u001b[0m \u001b[31m5.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading websockets-12.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (130 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m130.2/130.2 kB\u001b[0m \u001b[31m11.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hInstalling collected packages: pydub, websockets, tomlkit, semantic-version, ruff, python-multipart, orjson, h11, ffmpy, aiofiles, uvicorn, starlette, httpcore, httpx, fastapi, diffusers, gradio-client, peft, gradio, spaces\n",
+ " Attempting uninstall: tomlkit\n",
+ " Found existing installation: tomlkit 0.13.2\n",
+ " Uninstalling tomlkit-0.13.2:\n",
+ " Successfully uninstalled tomlkit-0.13.2\n",
+ "Successfully installed aiofiles-23.2.1 diffusers-0.30.2 fastapi-0.112.4 ffmpy-0.4.0 gradio-4.43.0 gradio-client-1.3.0 h11-0.14.0 httpcore-1.0.5 httpx-0.27.2 orjson-3.10.7 peft-0.12.0 pydub-0.25.1 python-multipart-0.0.9 ruff-0.6.4 semantic-version-2.10.0 spaces-0.30.2 starlette-0.38.5 tomlkit-0.12.0 uvicorn-0.30.6 websockets-12.0\n"
+ ]
+ }
+ ],
+ "source": [
+ "!pip install torch diffusers spaces transformers peft sentencepiece gradio"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Authenticate with Hugging Face\n",
+ "from huggingface_hub import login\n",
+ "\n",
+ "# Log in to Hugging Face using the provided token\n",
+ "hf_token = 'hf-token-authentication'\n",
+ "login(hf_token)"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "1Q39l67NZ4F-",
+ "outputId": "d5c36fff-c230-4101-917b-8bb13a717d40"
+ },
+ "execution_count": 2,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.\n",
+ "Token is valid (permission: fineGrained).\n",
+ "Your token has been saved to /root/.cache/huggingface/token\n",
+ "Login successful\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import spaces\n",
+ "import gradio as gr\n",
+ "import torch\n",
+ "from PIL import Image\n",
+ "from diffusers import DiffusionPipeline\n",
+ "import random\n",
+ "import uuid\n",
+ "from typing import Tuple\n",
+ "import numpy as np\n",
+ "\n",
+ "DESCRIPTIONz = \"\"\"## FLUX REALISM 🔥\"\"\"\n",
+ "\n",
+ "def save_image(img):\n",
+ " unique_name = str(uuid.uuid4()) + \".png\"\n",
+ " img.save(unique_name)\n",
+ " return unique_name\n",
+ "\n",
+ "def randomize_seed_fn(seed: int, randomize_seed: bool) -> int:\n",
+ " if randomize_seed:\n",
+ " seed = random.randint(0, MAX_SEED)\n",
+ " return seed\n",
+ "\n",
+ "MAX_SEED = np.iinfo(np.int32).max\n",
+ "\n",
+ "if not torch.cuda.is_available():\n",
+ " DESCRIPTIONz += \"\\n<p>⚠️Running on CPU, This may not work on CPU.</p>\"\n",
+ "\n",
+ "base_model = \"black-forest-labs/FLUX.1-dev\"\n",
+ "pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)\n",
+ "\n",
+ "lora_repo = \"prithivMLmods/Canopus-LoRA-Flux-FaceRealism\"\n",
+ "trigger_word = \"Realism\" # Leave trigger_word blank if not used.\n",
+ "pipe.load_lora_weights(lora_repo)\n",
+ "\n",
+ "pipe.to(\"cuda\")\n",
+ "\n",
+ "style_list = [\n",
+ " {\n",
+ " \"name\": \"3840 x 2160\",\n",
+ " \"prompt\": \"hyper-realistic 8K image of {prompt}. ultra-detailed, lifelike, high-resolution, sharp, vibrant colors, photorealistic\",\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"2560 x 1440\",\n",
+ " \"prompt\": \"hyper-realistic 4K image of {prompt}. ultra-detailed, lifelike, high-resolution, sharp, vibrant colors, photorealistic\",\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"HD+\",\n",
+ " \"prompt\": \"hyper-realistic 2K image of {prompt}. ultra-detailed, lifelike, high-resolution, sharp, vibrant colors, photorealistic\",\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"Style Zero\",\n",
+ " \"prompt\": \"{prompt}\",\n",
+ " },\n",
+ "]\n",
+ "\n",
+ "styles = {k[\"name\"]: k[\"prompt\"] for k in style_list}\n",
+ "\n",
+ "DEFAULT_STYLE_NAME = \"3840 x 2160\"\n",
+ "STYLE_NAMES = list(styles.keys())\n",
+ "\n",
+ "def apply_style(style_name: str, positive: str) -> str:\n",
+ " return styles.get(style_name, styles[DEFAULT_STYLE_NAME]).replace(\"{prompt}\", positive)\n",
+ "\n",
+ "@spaces.GPU(duration=60, enable_queue=True)\n",
+ "def generate(\n",
+ " prompt: str,\n",
+ " seed: int = 0,\n",
+ " width: int = 1024,\n",
+ " height: int = 1024,\n",
+ " guidance_scale: float = 3,\n",
+ " randomize_seed: bool = False,\n",
+ " style_name: str = DEFAULT_STYLE_NAME,\n",
+ " progress=gr.Progress(track_tqdm=True),\n",
+ "):\n",
+ " seed = int(randomize_seed_fn(seed, randomize_seed))\n",
+ "\n",
+ " positive_prompt = apply_style(style_name, prompt)\n",
+ "\n",
+ " if trigger_word:\n",
+ " positive_prompt = f\"{trigger_word} {positive_prompt}\"\n",
+ "\n",
+ " images = pipe(\n",
+ " prompt=positive_prompt,\n",
+ " width=width,\n",
+ " height=height,\n",
+ " guidance_scale=guidance_scale,\n",
+ " num_inference_steps=16,\n",
+ " num_images_per_prompt=1,\n",
+ " output_type=\"pil\",\n",
+ " ).images\n",
+ " image_paths = [save_image(img) for img in images]\n",
+ " print(image_paths)\n",
+ " return image_paths, seed\n",
+ "\n",
+ "\n",
+ "def load_predefined_images():\n",
+ " predefined_images = [\n",
+ " \"assets/11.png\",\n",
+ " \"assets/22.png\",\n",
+ " \"assets/33.png\",\n",
+ " \"assets/44.png\",\n",
+ " \"assets/55.webp\",\n",
+ " \"assets/66.png\",\n",
+ " \"assets/77.png\",\n",
+ " \"assets/88.png\",\n",
+ " \"assets/99.png\",\n",
+ " ]\n",
+ " return predefined_images\n",
+ "\n",
+ "\n",
+ "\n",
+ "examples = [\n",
+ " \"A portrait of an attractive woman in her late twenties with light brown hair and purple, wearing large a a yellow sweater. She is looking directly at the camera, standing outdoors near trees.. --ar 128:85 --v 6.0 --style raw\",\n",
+ " \"A photo of the model wearing a white bodysuit and beige trench coat, posing in front of a train station with hands on head, soft light, sunset, fashion photography, high resolution, 35mm lens, f/22, natural lighting, global illumination. --ar 85:128 --v 6.0 --style raw\",\n",
+ "]\n",
+ "\n",
+ "\n",
+ "css = '''\n",
+ ".gradio-container{max-width: 575px !important}\n",
+ "h1{text-align:center}\n",
+ "footer {\n",
+ " visibility: hidden\n",
+ "}\n",
+ "'''\n",
+ "\n",
+ "with gr.Blocks(css=css, theme=\"bethecloud/storj_theme\") as demo:\n",
+ " gr.Markdown(DESCRIPTIONz)\n",
+ " with gr.Row():\n",
+ " prompt = gr.Text(\n",
+ " label=\"Prompt\",\n",
+ " show_label=False,\n",
+ " max_lines=1,\n",
+ " placeholder=\"Enter your prompt\",\n",
+ " container=False,\n",
+ " )\n",
+ " run_button = gr.Button(\"Run\", scale=0)\n",
+ " result = gr.Gallery(label=\"Result\", columns=1, show_label=False)\n",
+ "\n",
+ " with gr.Accordion(\"Advanced options\", open=False, visible=True):\n",
+ " seed = gr.Slider(\n",
+ " label=\"Seed\",\n",
+ " minimum=0,\n",
+ " maximum=MAX_SEED,\n",
+ " step=1,\n",
+ " value=0,\n",
+ " visible=True\n",
+ " )\n",
+ " randomize_seed = gr.Checkbox(label=\"Randomize seed\", value=True)\n",
+ "\n",
+ " with gr.Row(visible=True):\n",
+ " width = gr.Slider(\n",
+ " label=\"Width\",\n",
+ " minimum=512,\n",
+ " maximum=2048,\n",
+ " step=64,\n",
+ " value=1024,\n",
+ " )\n",
+ " height = gr.Slider(\n",
+ " label=\"Height\",\n",
+ " minimum=512,\n",
+ " maximum=2048,\n",
+ " step=64,\n",
+ " value=1024,\n",
+ " )\n",
+ "\n",
+ " with gr.Row():\n",
+ " guidance_scale = gr.Slider(\n",
+ " label=\"Guidance Scale\",\n",
+ " minimum=0.1,\n",
+ " maximum=20.0,\n",
+ " step=0.1,\n",
+ " value=3.0,\n",
+ " )\n",
+ " num_inference_steps = gr.Slider(\n",
+ " label=\"Number of inference steps\",\n",
+ " minimum=1,\n",
+ " maximum=40,\n",
+ " step=1,\n",
+ " value=16,\n",
+ " )\n",
+ "\n",
+ " style_selection = gr.Radio(\n",
+ " show_label=True,\n",
+ " container=True,\n",
+ " interactive=True,\n",
+ " choices=STYLE_NAMES,\n",
+ " value=DEFAULT_STYLE_NAME,\n",
+ " label=\"Quality Style\",\n",
+ " )\n",
+ "\n",
+ "\n",
+ "\n",
+ " gr.Examples(\n",
+ " examples=examples,\n",
+ " inputs=prompt,\n",
+ " outputs=[result, seed],\n",
+ " fn=generate,\n",
+ " cache_examples=False,\n",
+ " )\n",
+ "\n",
+ " gr.on(\n",
+ " triggers=[\n",
+ " prompt.submit,\n",
+ " run_button.click,\n",
+ " ],\n",
+ " fn=generate,\n",
+ " inputs=[\n",
+ " prompt,\n",
+ " seed,\n",
+ " width,\n",
+ " height,\n",
+ " guidance_scale,\n",
+ " randomize_seed,\n",
+ " style_selection,\n",
+ " ],\n",
+ " outputs=[result, seed],\n",
+ " api_name=\"run\",\n",
+ " )\n",
+ "\n",
+ " gr.Markdown(\"### Generated Images\")\n",
+ " predefined_gallery = gr.Gallery(label=\"Generated Images\", columns=3, show_label=False, value=load_predefined_images())\n",
+ " gr.Markdown(\"**Disclaimer/Note:**\")\n",
+ "\n",
+ " gr.Markdown(\"🔥This space provides realistic image generation, which works better for human faces and portraits. Realistic trigger works properly, better for photorealistic trigger words, close-up shots, face diffusion, male, female characters.\")\n",
+ "\n",
+ " gr.Markdown(\"🔥users are accountable for the content they generate and are responsible for ensuring it meets appropriate ethical standards.\")\n",
+ "\n",
+ "if __name__ == \"__main__\":\n",
+ " demo.queue(max_size=40).launch()"
+ ],
+ "metadata": {
+ "id": "35LXnZWVaBZ_"
+ },
+ "execution_count": null,
+ "outputs": []
+ }
+ ]
+ }