.gitattributes CHANGED
@@ -35,4 +35,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
  *.jpg filter=lfs diff=lfs merge=lfs -text
37
  *.png filter=lfs diff=lfs merge=lfs -text
38
- *.mp4 filter=lfs diff=lfs merge=lfs -text
 
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
  *.jpg filter=lfs diff=lfs merge=lfs -text
37
  *.png filter=lfs diff=lfs merge=lfs -text
 
Makefile CHANGED
@@ -21,9 +21,4 @@ dev:
21
 
22
  hf:
23
  chmod 777 hf.sh
24
- ./hf.sh
25
-
26
- requirements:
27
- uv pip compile --no-annotate pyproject.toml --no-deps --no-strip-extras --no-header \
28
- | sed -E 's/([a-zA-Z0-9_-]+(\[[a-zA-Z0-9_,-]+\])?)[=><~!].*/\1/g' \
29
- > requirements.txt
 
21
 
22
  hf:
23
  chmod 777 hf.sh
24
+ ./hf.sh
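The removed `requirements` target compiled `pyproject.toml` with uv and piped the result through sed to strip version pins before writing `requirements.txt`. A hedged Python sketch of that same pin-stripping step (the file name `compiled.txt` is a placeholder, not part of the repo):

```python
# Hedged sketch of the removed `requirements` target: strip version specifiers
# such as ">=5.32.1" from a compiled requirements list.
# Assumes something like `uv pip compile --no-annotate pyproject.toml > compiled.txt` ran first.
import re

SPEC = re.compile(r"^([a-zA-Z0-9_-]+(\[[a-zA-Z0-9_,-]+\])?)[=><~!].*$")

with open("compiled.txt") as src, open("requirements.txt", "w") as dst:
    for line in src:
        dst.write(SPEC.sub(r"\1", line.rstrip("\n")) + "\n")

# "gradio[mcp]>=5.32.1" -> "gradio[mcp]", "numpy>=2.2.6" -> "numpy"
```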
 
README.md CHANGED
@@ -1,79 +1,79 @@
1
  ---
2
  title: ImageAlfred
3
  emoji: 😻
4
- tags:
5
- - mcp-server-track
6
  colorFrom: green
7
  colorTo: purple
8
  sdk: gradio
9
- sdk_version: 5.33.0
10
  app_file: src/app.py
11
  pinned: false
12
  license: apache-2.0
13
  short_description: 'Alfred of Images: An MCP server to handle your image edits.'
14
  ---
15
 
16
- <div align="center">
17
- <a href="https://github.com/mahan-ym/ImageAlfred">
18
- <img src="./src/assets/icons/ImageAlfredIcon.png" alt="ImageAlfred" width=200 height=200>
19
 
20
- <span><img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white"></span>
21
 
22
- </a>
23
- <h1>Image Alfred</h1>
24
 
25
- ImageAlfred is an image Model Context Protocol (MCP) tool designed to streamline image processing workflows
26
-
27
- <img alt="Python Version from PEP 621 TOML" src="https://img.shields.io/python/required-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2Fmahan-ym%2FImageAlfred%2Fmain%2Fpyproject.toml">
28
- <img src="https://badge.mcpx.dev?type=server" title="MCP Server"/>
29
- <img alt="GitHub License" src="https://img.shields.io/github/license/mahan-ym/ImageAlfred">
30
-
31
- <a href=https://huggingface.co> <img src="src/assets/icons/hf-logo.svg" alt="huggingface" height=40> </a>
32
- <a href="https://www.python.org"><img src="src/assets/icons/python-logo-only.svg" alt="python" height=40></a>
33
- </div>
34
-
35
- ## Demo
36
-
37
- [🎬 Video demo](https://youtu.be/tEov-Bcuulk)
38
 
39
  ## Maintainers
 
 
40
 
41
- [Mahan-ym | Mahan Yarmohammad](https://www.mahan-ym.com/)
42
-
43
- [Soodoo | Saaed Saadatipour](https://soodoo.me/)
44
-
45
- ## Tools
46
-
47
- - [Gradio](https://www.gradio.app/): Serving user interface and MCP server.
48
- - [Modal.com](https://modal.com/): AI infrastructure making all the magic 🔮 possible.
49
- - [SAM](https://segment-anything.com/): Segment Anything model by meta for image segmentation and mask generation.
50
- - [CLIPSeg](https://github.com/timojl/clipseg): Image Segmentation using CLIP. We used it as a more precise object detection model.
51
- - [OWLv2](https://huggingface.co/google/owlv2-large-patch14-ensemble): Zero-Shot object detection (Better performance in license plate detection and privacy preserving use-cases)
52
  - [HuggingFace](https://huggingface.co/): Downloading SAM and using Space for hosting.
 
53
 
54
  ## Getting Started
55
 
56
  ### Prerequisites
57
-
58
- - Python 3.12+
59
  - [uv](https://github.com/astral-sh/uv) (a fast Python package installer and virtual environment manager)
60
 
61
  ### Installation
62
 
63
- It will create a virtual environment, activate it, install dependencies, and set up Modal.
64
 
65
  ```bash
66
- make install
67
  ```
68
 
69
- ### Running the App
70
 
71
- This will launch the Gradio interface for ImageAlfred.
 
 
72
 
73
  ```bash
74
- make run
75
  ```
76
 
 
 
77
  ## License
78
 
79
  This project is licensed under the terms of the LICENSE file in this repository.
 
1
  ---
2
  title: ImageAlfred
3
  emoji: 😻
 
 
4
  colorFrom: green
5
  colorTo: purple
6
  sdk: gradio
7
+ sdk_version: 5.32.1
8
  app_file: src/app.py
9
  pinned: false
10
  license: apache-2.0
11
  short_description: 'Alfred of Images: An MCP server to handle your image edits.'
12
  ---
13
 
14
+ ![Image Alfred](./src/assets/ImageAlfredIcon.png)
 
 
15
 
16
+ # ImageAlfred
17
 
18
+ ImageAlfred is an image Model Context Protocol (MCP) tool designed to streamline image processing workflows.
19
+ <!-- It provides a user-friendly interface for interacting with image models, leveraging the power of Gradio for the frontend and Modal for scalable backend deployment. -->
20
 
21
+ <!-- ## Features
22
+ - Intuitive web interface for image processing
23
+ - Powered by Gradio for rapid prototyping and UI
24
+ - Scalable and serverless execution with Modal
25
+ - Easily extendable for custom image models and workflows -->
 
 
 
 
 
 
 
 
26
 
27
  ## Maintainers
28
+ [Mahan Yarmohammad (Mahan-ym)](https://www.mahan-ym.com/)
29
+ [Saaed Saadatipour (Soodoo)](https://soodoo.me/)
30
 
31
+ # Used Tools
32
+ - [Gradio](https://www.gradio.app/): Serving user interface and MCP server
33
+ - [lang-segment-anything](https://github.com/luca-medeiros/lang-segment-anything): Which uses [SAM](https://segment-anything.com/) and [Grounding Dino](https://github.com/IDEA-Research/GroundingDINO) under the hood to segment images.
 
 
 
 
 
 
 
 
34
  - [HuggingFace](https://huggingface.co/): Downloading SAM and using Space for hosting.
35
+ - [Modal.com](https://modal.com/): AI infrastructure making all the magic possible.
36
 
37
  ## Getting Started
38
 
39
  ### Prerequisites
40
+ - Python 3.13+
 
41
  - [uv](https://github.com/astral-sh/uv) (a fast Python package installer and virtual environment manager)
42
 
43
  ### Installation
44
 
45
+ 1. **Create a virtual environment using uv:**
46
 
47
  ```bash
48
+ uv venv
49
  ```
50
 
51
+ 2. **Activate the virtual environment:**
52
 
53
+ ```bash
54
+ source .venv/bin/activate
55
+ ```
56
+
57
+ 3. **Install dependencies:**
58
+
59
+ ```bash
60
+ uv sync
61
+ ```
62
+
63
+ 4. **Setup Modal**
64
+
65
+ ```bash
66
+ modal setup
67
+ ```
68
+
69
+ ### Running the App
70
 
71
  ```bash
72
+ uv run src/app.py
73
  ```
74
 
75
+ This will launch the Gradio interface for ImageAlfred.
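The app can also be driven programmatically once it is running. A minimal sketch using `gradio_client`, assuming Gradio's default local port; the `api_name` and argument layout are assumptions and should be checked against the app's "Use via API" panel:

```python
# Hedged sketch: call the HSV recolor tool of the running Gradio app from Python.
# The api_name and argument order are assumptions; verify them in "Use via API".
from gradio_client import Client, handle_file

client = Client("http://127.0.0.1:7860")  # local app; a Space URL also works
result = client.predict(
    handle_file("src/assets/examples/test_1.jpg"),  # input image
    [["pants", 128, 1]],                            # [object, hue, saturation_scale]
    api_name="/change_color_objects_hsv",           # assumed endpoint name
)
print(result)  # path to the processed image returned by Gradio
```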
76
+
77
  ## License
78
 
79
  This project is licensed under the terms of the LICENSE file in this repository.
claude_desktop_config.json DELETED
@@ -1,22 +0,0 @@
1
- {
2
- "mcpServers": {
3
- "Image Alfred": {
4
- "command": "npx",
5
- "args": [
6
- "mcp-remote",
7
- "https://agents-mcp-hackathon-imagealfred.hf.space/gradio_api/mcp/sse",
8
- "--transport",
9
- "sse-only"
10
- ]
11
- },
12
- "local Image Alfred": {
13
- "command": "npx",
14
- "args": [
15
- "mcp-remote",
16
- "http://127.0.0.1:7860/gradio_api/mcp/sse",
17
- "--transport",
18
- "sse-only"
19
- ]
20
- }
21
- }
22
- }
 
 
hf.sh CHANGED
@@ -5,6 +5,7 @@ REPO_URL="https://github.com/mahan-ym/ImageAlfred"
5
  REPO_DIR="ImageAlfred"
6
  TEMP_DIR="./tmp"
7
  SRC_DIR="src"
 
8
 
9
  echo "🚀 Starting Huggingface Space update script..."
10
 
@@ -31,21 +32,6 @@ if [ -d "$SRC_DIR" ]; then
31
  fi
32
  cp -r "$TEMP_DIR/$REPO_DIR/$SRC_DIR" .
33
  mv "$TEMP_DIR/$REPO_DIR/Makefile" .
34
- mv "$TEMP_DIR/$REPO_DIR/requirements.txt" .
35
- mv "$TEMP_DIR/$REPO_DIR/pyproject.toml" .
36
- mv "$TEMP_DIR/$REPO_DIR/uv.lock" .
37
- mv "$TEMP_DIR/$REPO_DIR/claude_desktop_config.json" .
38
- mv "$TEMP_DIR/$REPO_DIR/LICENSE" .
39
-
40
- # Concatenate README files
41
- echo "📄 Creating combined README file..."
42
- if [ -f "$TEMP_DIR/$REPO_DIR/hf_readme.md" ] && [ -f "$TEMP_DIR/$REPO_DIR/README.md" ]; then
43
- cat "$TEMP_DIR/$REPO_DIR/hf_readme.md" "$TEMP_DIR/$REPO_DIR/README.md" > README.md
44
- echo "✅ Combined README created successfully!"
45
- else
46
- echo "⚠️ Could not find one or both README files for concatenation."
47
- fi
48
-
49
 
50
  # Check if copy was successful
51
  if [ $? -eq 0 ]; then
 
5
  REPO_DIR="ImageAlfred"
6
  TEMP_DIR="./tmp"
7
  SRC_DIR="src"
8
+ REQUIREMENTS_FILE="requirements.txt"
9
 
10
  echo "🚀 Starting Huggingface Space update script..."
11
 
 
32
  fi
33
  cp -r "$TEMP_DIR/$REPO_DIR/$SRC_DIR" .
34
  mv "$TEMP_DIR/$REPO_DIR/Makefile" .
 
 
 
35
 
36
  # Check if copy was successful
37
  if [ $? -eq 0 ]; then
pyproject.toml CHANGED
@@ -9,6 +9,7 @@ requires-python = ">=3.12"
9
 
10
  dependencies = [
11
  "gradio[mcp]>=5.32.1",
 
12
  "modal>=1.0.2",
13
  "numpy>=2.2.6",
14
  "pillow>=11.2.1",
@@ -17,11 +18,8 @@ dependencies = [
17
  [dependency-groups]
18
  dev = [
19
  "jupyterlab>=4.4.3",
20
- "matplotlib>=3.10.3",
21
  "opencv-contrib-python>=4.11.0.86",
22
- "rapidfuzz>=3.13.0",
23
  "ruff>=0.11.12",
24
- "supervision>=0.25.1",
25
  ]
26
 
27
  [tool.ruff]
 
9
 
10
  dependencies = [
11
  "gradio[mcp]>=5.32.1",
12
+ "matplotlib>=3.10.3",
13
  "modal>=1.0.2",
14
  "numpy>=2.2.6",
15
  "pillow>=11.2.1",
 
18
  [dependency-groups]
19
  dev = [
20
  "jupyterlab>=4.4.3",
 
21
  "opencv-contrib-python>=4.11.0.86",
 
22
  "ruff>=0.11.12",
 
23
  ]
24
 
25
  [tool.ruff]
requirements.txt CHANGED
@@ -1,4 +1,4 @@
1
  gradio[mcp]
2
  modal
3
  numpy
4
- pillow
 
1
  gradio[mcp]
2
  modal
3
  numpy
4
+ pillow
src/app.py CHANGED
@@ -6,7 +6,6 @@ from tools import (
6
  change_color_objects_hsv,
7
  change_color_objects_lab,
8
  privacy_preserve_image,
9
- remove_background,
10
  )
11
 
12
  gr.set_static_paths(paths=[Path.cwd().absolute() / "assets"])
@@ -19,19 +18,20 @@ title = """Image Alfred - Recolor and Privacy Preserving Image MCP Tools
19
  """ # noqa: E501
20
 
21
  hsv_df_input = gr.Dataframe(
22
- headers=["Object", "Red", "Green", "Blue"],
23
- datatype=["str", "number", "number", "number"],
24
- col_count=(4, "fixed"),
25
  show_row_numbers=True,
26
- label="Target Objects and Their new RGB Colors",
27
  type="array",
 
28
  )
29
 
30
  lab_df_input = gr.Dataframe(
31
  headers=["Object", "New A", "New B"],
32
  datatype=["str", "number", "number"],
33
- col_count=(3, "fixed"),
34
- label="Target Objects and New Settings.(0-255 -- 128 = Neutral)",
35
  type="array",
36
  )
37
 
@@ -45,20 +45,19 @@ change_color_objects_hsv_tool = gr.Interface(
45
  title="Image Recolor Tool (HSV)",
46
  description="""
47
  This tool allows you to recolor objects in an image using the HSV color space.
48
- You can specify the RGB values for each object.""", # noqa: E501
49
  examples=[
50
  [
51
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_1.jpg",
52
- [
53
- ["pants", 255, 178, 102],
54
- ],
 
 
55
  ],
56
  [
57
- "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_8.jpg",
58
- [
59
- ["pants", 114, 117, 34],
60
- ["shirt", 51, 51, 37],
61
- ],
62
  ],
63
  ],
64
  )
@@ -78,15 +77,15 @@ change_color_objects_lab_tool = gr.Interface(
78
  examples=[
79
  [
80
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_1.jpg",
81
- [["pants", 112, 128]],
82
  ],
83
  [
84
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_4.jpg",
85
- [["desk", 166, 193]],
86
  ],
87
  [
88
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_5.jpg",
89
- [["suits coat", 110, 133]],
90
  ],
91
  ],
92
  )
@@ -107,14 +106,6 @@ privacy_preserve_tool = gr.Interface(
107
  step=1,
108
  info="Higher values result in stronger blurring.",
109
  ),
110
- gr.Slider(
111
- label="Detection Threshold",
112
- minimum=0.01,
113
- maximum=0.99,
114
- value=0.2,
115
- step=0.01,
116
- info="Model threshold for detecting objects.",
117
- ),
118
  ],
119
  outputs=gr.Image(label="Output Image"),
120
  title="Privacy Preserving Tool",
@@ -122,59 +113,19 @@ privacy_preserve_tool = gr.Interface(
122
  examples=[
123
  [
124
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_3.jpg",
125
- "license plate",
126
  10,
127
- 0.5,
128
- ],
129
- [
130
- "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_8.jpg",
131
- "face",
132
- 15,
133
- 0.1,
134
- ],
135
- [
136
- "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_6.jpg",
137
- "face",
138
- 20,
139
- 0.1,
140
- ],
141
- ],
142
- )
143
-
144
- remove_background_tool = gr.Interface(
145
- fn=remove_background,
146
- inputs=[
147
- gr.Image(label="Input Image", type="pil"),
148
- ],
149
- outputs=gr.Image(label="Output Image"),
150
- title="Remove Image Background Tool",
151
- description="Upload an image to remove the background.",
152
- examples=[
153
- [
154
- "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_5.jpg",
155
- ],
156
- [
157
- "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_6.jpg",
158
- ],
159
- [
160
- "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_8.jpg",
161
  ],
162
  ],
163
  )
164
 
165
  demo = gr.TabbedInterface(
166
  [
167
- privacy_preserve_tool,
168
- remove_background_tool,
169
  change_color_objects_hsv_tool,
170
  change_color_objects_lab_tool,
 
171
  ],
172
- [
173
- "Privacy Preserving Tool",
174
- "Remove Background Tool",
175
- "Change Color Objects HSV",
176
- "Change Color Objects LAB",
177
- ],
178
  title=title,
179
  theme=gr.themes.Default(
180
  primary_hue="blue",
 
6
  change_color_objects_hsv,
7
  change_color_objects_lab,
8
  privacy_preserve_image,
 
9
  )
10
 
11
  gr.set_static_paths(paths=[Path.cwd().absolute() / "assets"])
 
18
  """ # noqa: E501
19
 
20
  hsv_df_input = gr.Dataframe(
21
+ headers=["Object", "Hue", "Saturation Scale"],
22
+ datatype=["str", "number", "number"],
23
+ col_count=(3, "fixed"),
24
  show_row_numbers=True,
25
+ label="Target Objects and New Settings",
26
  type="array",
27
+ # row_count=(1, "dynamic"),
28
  )
29
 
30
  lab_df_input = gr.Dataframe(
31
  headers=["Object", "New A", "New B"],
32
  datatype=["str", "number", "number"],
33
+ col_count=(3,"fixed"),
34
+ label="Target Objects and New Settings",
35
  type="array",
36
  )
37
 
 
45
  title="Image Recolor Tool (HSV)",
46
  description="""
47
  This tool allows you to recolor objects in an image using the HSV color space.
48
+ You can specify the hue and saturation scale for each object.""", # noqa: E501
49
  examples=[
50
  [
51
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_1.jpg",
52
+ [["pants", 128, 1]],
53
+ ],
54
+ [
55
+ "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_4.jpg",
56
+ [["desk", 15, 0.5], ["left cup", 40, 1.1]],
57
  ],
58
  [
59
+ "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_5.jpg",
60
+ [["suits", 60, 1.5], ["pants", 10, 0.8]],
 
 
 
61
  ],
62
  ],
63
  )
 
77
  examples=[
78
  [
79
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_1.jpg",
80
+ [["pants", 128, 1]],
81
  ],
82
  [
83
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_4.jpg",
84
+ [["desk", 15, 0.5], ["left cup", 40, 1.1]],
85
  ],
86
  [
87
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_5.jpg",
88
+ [["suits", 60, 1.5], ["pants", 10, 0.8]],
89
  ],
90
  ],
91
  )
 
106
  step=1,
107
  info="Higher values result in stronger blurring.",
108
  ),
 
109
  ],
110
  outputs=gr.Image(label="Output Image"),
111
  title="Privacy Preserving Tool",
 
113
  examples=[
114
  [
115
  "https://raw.githubusercontent.com/mahan-ym/ImageAlfred/main/src/assets/examples/test_3.jpg",
116
+ "license plate.",
117
  10,
 
 
118
  ],
119
  ],
120
  )
121
 
122
  demo = gr.TabbedInterface(
123
  [
 
 
124
  change_color_objects_hsv_tool,
125
  change_color_objects_lab_tool,
126
+ privacy_preserve_tool,
127
  ],
128
+ ["Change Color Objects HSV", "Change Color Objects LAB", "Privacy Preserving Tool"],
 
 
129
  title=title,
130
  theme=gr.themes.Default(
131
  primary_hue="blue",
src/assets/examples/test_6.jpg DELETED

Git LFS Details

  • SHA256: c07eebe3188403b130a467f0e96ca72503f7498649d4101752d94bf4c9294635
  • Pointer size: 133 Bytes
  • Size of remote file: 10.5 MB
src/assets/examples/test_7.jpg DELETED

Git LFS Details

  • SHA256: 1ab95b5752d51f55bf4d774b4bd028e66897b4ca2b1c459f73014cca949d0945
  • Pointer size: 132 Bytes
  • Size of remote file: 1.47 MB
src/assets/examples/test_8.jpg DELETED

Git LFS Details

  • SHA256: a28b42702894d8ebc72c7f80b1ee218cbe2f0a4db9b554f70b07ef3aa823ffb4
  • Pointer size: 132 Bytes
  • Size of remote file: 1.22 MB
src/assets/icons/hf-logo.svg DELETED
src/assets/icons/python-logo-only.svg DELETED
src/assets/vid/demo.mp4 DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:249a630becf81774a3cc86bf35858c648c7a673a4e8c2854b4a92fb02eb7c01f
3
- size 3678354
 
 
 
 
src/modal_app.py CHANGED
@@ -29,17 +29,13 @@ image = (
29
  "TORCH_HOME": TORCH_HOME,
30
  }
31
  )
32
- .apt_install(
33
- "git",
34
- )
35
  .pip_install(
36
  "huggingface-hub",
37
  "hf_transfer",
38
  "Pillow",
39
  "numpy",
40
- "transformers",
41
  "opencv-contrib-python-headless",
42
- "scipy",
43
  gpu="A10G",
44
  )
45
  .pip_install(
@@ -48,284 +44,52 @@ image = (
48
  index_url="https://download.pytorch.org/whl/cu124",
49
  gpu="A10G",
50
  )
51
- .pip_install("git+https://github.com/openai/CLIP.git", gpu="A10G")
52
- .pip_install("git+https://github.com/facebookresearch/sam2.git", gpu="A10G")
53
  .pip_install(
54
- "git+https://github.com/PramaLLC/BEN2.git#egg=ben2",
55
  gpu="A10G",
56
  )
57
  )
58
 
59
 
60
  @app.function(
61
- image=image,
62
- gpu="A10G",
63
- volumes={volume_path: volume},
64
- timeout=60 * 3,
65
- )
66
- def prompt_segment(
67
- image_pil: Image.Image,
68
- prompts: list[str],
69
- ) -> list[dict]:
70
- clip_results = clip.remote(image_pil, prompts)
71
-
72
- if not clip_results:
73
- print("No boxes returned from CLIP.")
74
- return None
75
-
76
- boxes = np.array(clip_results["boxes"])
77
-
78
- sam_result_masks, sam_result_scores = sam2.remote(image_pil=image_pil, boxes=boxes)
79
-
80
- print(f"sam_result_mask {sam_result_masks}")
81
-
82
- if not sam_result_masks.any():
83
- print("No masks or scores returned from SAM2.")
84
- return None
85
-
86
- if sam_result_masks.ndim == 3:
87
- # If the masks are in 3D, we need to convert them to 4D
88
- sam_result_masks = [sam_result_masks]
89
-
90
- results = {
91
- "labels": clip_results["labels"],
92
- "boxes": boxes,
93
- "clip_scores": clip_results["scores"],
94
- "sam_masking_scores": sam_result_scores,
95
- "masks": sam_result_masks,
96
- }
97
- return results
98
-
99
-
100
- @app.function(
101
- image=image,
102
  gpu="A10G",
103
- volumes={volume_path: volume},
104
- timeout=60 * 3,
105
- )
106
- def privacy_prompt_segment(
107
- image_pil: Image.Image,
108
- prompts: list[str],
109
- threshold: float,
110
- ) -> list[dict]:
111
- owlv2_results = owlv2.remote(image_pil, prompts, threshold=threshold)
112
-
113
- if not owlv2_results:
114
- print("No boxes returned from OWLV2.")
115
- return None
116
-
117
- boxes = np.array(owlv2_results["boxes"])
118
-
119
- sam_result_masks, sam_result_scores = sam2.remote(image_pil=image_pil, boxes=boxes)
120
-
121
- print(f"sam_result_mask {sam_result_masks}")
122
-
123
- if not sam_result_masks.any():
124
- print("No masks or scores returned from SAM2.")
125
- return None
126
-
127
- if sam_result_masks.ndim == 3:
128
- # If the masks are in 3D, we need to convert them to 4D
129
- sam_result_masks = [sam_result_masks]
130
-
131
- results = {
132
- "labels": owlv2_results["labels"],
133
- "boxes": boxes,
134
- "owlv2_scores": owlv2_results["scores"],
135
- "sam_masking_scores": sam_result_scores,
136
- "masks": sam_result_masks,
137
- }
138
- return results
139
-
140
-
141
- @app.function(
142
- image=image,
143
- gpu="A100",
144
- volumes={volume_path: volume},
145
- timeout=60 * 3,
146
- )
147
- def sam2(image_pil: Image.Image, boxes: list[np.ndarray]) -> list[dict]:
148
- import torch
149
- from sam2.sam2_image_predictor import SAM2ImagePredictor
150
-
151
- predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
152
-
153
- with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
154
- predictor.set_image(image_pil)
155
- masks, scores, _ = predictor.predict(
156
- point_coords=None,
157
- point_labels=None,
158
- box=boxes,
159
- multimask_output=False,
160
- )
161
- return masks, scores
162
-
163
-
164
- @app.function(
165
- image=image,
166
- gpu="A100",
167
- volumes={volume_path: volume},
168
- )
169
- def owlv2(
170
- image_pil: Image.Image,
171
- labels: list[str],
172
- threshold: float,
173
- ) -> list[dict]:
174
- """
175
- Perform zero-shot segmentation on an image using specified labels.
176
- Args:
177
- image_pil (Image.Image): The input image as a PIL Image.
178
- labels (list[str]): List of labels for zero-shot segmentation.
179
-
180
- Returns:
181
- list[dict]: List of dictionaries containing label and bounding box information.
182
- """
183
- from transformers import pipeline
184
-
185
- checkpoint = "google/owlv2-large-patch14-ensemble"
186
- detector = pipeline(
187
- model=checkpoint,
188
- task="zero-shot-object-detection",
189
- device="cuda",
190
- use_fast=True,
191
- )
192
- # Load the image
193
- predictions = detector(
194
- image_pil,
195
- candidate_labels=labels,
196
- )
197
- labels = []
198
- scores = []
199
- boxes = []
200
- for prediction in predictions:
201
- if prediction["score"] < threshold:
202
- continue
203
- labels.append(prediction["label"])
204
- scores.append(prediction["score"])
205
- boxes.append(np.array(list(prediction["box"].values())))
206
- if labels == []:
207
- print("No predictions found with score above threshold.")
208
- return None
209
- predictions = {"labels": labels, "scores": scores, "boxes": boxes}
210
- return predictions
211
-
212
-
213
- @app.function(
214
  image=image,
215
- gpu="A100",
216
  volumes={volume_path: volume},
 
217
  timeout=60 * 3,
218
  )
219
- def clip(
220
  image_pil: Image.Image,
221
- prompts: list[str],
222
- ) -> list[dict]:
223
- """
224
- returns:
225
- dict with keys each are lists:
226
- - labels: str, the prompt used for the prediction
227
- - scores: float, confidence score of the prediction
228
- - boxes: np.array representing bounding box coordinates
229
- """
230
-
231
- from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
232
- import torch
233
-
234
- processor = CLIPSegProcessor.from_pretrained(
235
- "CIDAS/clipseg-rd64-refined",
236
- use_fast=True,
237
- )
238
- model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
239
-
240
- # Get original image dimensions
241
- orig_width, orig_height = image_pil.size
242
-
243
- inputs = processor(
244
- text=prompts,
245
- images=[image_pil] * len(prompts),
246
- padding="max_length",
247
- return_tensors="pt",
248
  )
249
- # predict
250
- with torch.no_grad():
251
- outputs = model(**inputs)
252
- preds = outputs.logits.unsqueeze(1)
253
-
254
- # Get the dimensions of the prediction output
255
- pred_height, pred_width = preds.shape[-2:]
256
-
257
- # Calculate scaling factors
258
- width_scale = orig_width / pred_width
259
- height_scale = orig_height / pred_height
260
-
261
- labels = []
262
- scores = []
263
- boxes = []
264
-
265
- # Process each prediction to find bounding boxes in high probability regions
266
- for i, prompt in enumerate(prompts):
267
- # Apply sigmoid to get probability map
268
- pred_tensor = torch.sigmoid(preds[i][0])
269
- # Convert tensor to numpy array
270
- pred_np = pred_tensor.cpu().numpy()
271
-
272
- # Convert to uint8 for OpenCV processing
273
- heatmap = (pred_np * 255).astype(np.uint8)
274
-
275
- # Apply threshold to find high probability regions
276
- _, binary = cv2.threshold(heatmap, 127, 255, cv2.THRESH_BINARY)
277
-
278
- # Find contours in thresholded image
279
- contours, _ = cv2.findContours(
280
- binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
281
- )
282
-
283
- # Process each contour to get bounding boxes
284
- for contour in contours:
285
- # Skip very small contours that might be noise
286
- if cv2.contourArea(contour) < 100: # Minimum area threshold
287
- continue
288
-
289
- # Get bounding box coordinates in prediction space
290
- x, y, w, h = cv2.boundingRect(contour)
291
-
292
- # Scale coordinates to original image dimensions
293
- x_orig = int(x * width_scale)
294
- y_orig = int(y * height_scale)
295
- w_orig = int(w * width_scale)
296
- h_orig = int(h * height_scale)
297
-
298
- # Calculate confidence score based on average probability in the region
299
- mask = np.zeros_like(pred_np)
300
- cv2.drawContours(mask, [contour], 0, 1, -1)
301
- confidence = float(np.mean(pred_np[mask == 1]))
302
-
303
- labels.append(prompt)
304
- scores.append(confidence)
305
- boxes.append(
306
- np.array(
307
- [
308
- x_orig,
309
- y_orig,
310
- x_orig + w_orig,
311
- y_orig + h_orig,
312
- ]
313
- )
314
- )
315
-
316
- if labels == []:
317
  return None
 
 
 
 
 
318
 
319
- results = {
320
- "labels": labels,
321
- "scores": scores,
322
- "boxes": boxes,
323
- }
324
- return results
325
 
326
 
327
  @app.function(
328
- gpu="A10G",
329
  image=image,
330
  volumes={volume_path: volume},
331
  timeout=60 * 3,
@@ -334,20 +98,19 @@ def change_image_objects_hsv(
334
  image_pil: Image.Image,
335
  targets_config: list[list[str | int | float]],
336
  ) -> Image.Image:
 
 
 
 
337
  if not isinstance(targets_config, list) or not all(
338
  (
339
  isinstance(target, list)
340
- and len(target) == 4
341
  and isinstance(target[0], str)
342
- and isinstance(target[1], (int))
343
- and isinstance(target[2], (int))
344
- and isinstance(target[3], (int))
345
- and target[1] >= 0
346
- and target[1] <= 255
347
  and target[2] >= 0
348
- and target[2] <= 255
349
- and target[3] >= 0
350
- and target[3] <= 255
351
  )
352
  for target in targets_config
353
  ):
@@ -355,66 +118,38 @@ def change_image_objects_hsv(
355
  "targets_config must be a list of lists, each containing [target_name, hue, saturation_scale]." # noqa: E501
356
  )
357
  print("Change image objects hsv targets config:", targets_config)
358
- prompts = [target[0].strip() for target in targets_config]
359
 
360
- prompt_segment_results = prompt_segment.remote(
361
- image_pil=image_pil,
362
- prompts=prompts,
363
- )
364
- if not prompt_segment_results:
365
  return image_pil
366
 
367
- output_labels = prompt_segment_results["labels"]
 
368
 
369
  img_array = np.array(image_pil)
370
  img_hsv = cv2.cvtColor(img_array, cv2.COLOR_RGB2HSV).astype(np.float32)
371
 
372
- for idx, label in enumerate(output_labels):
373
- if not label or label == "":
374
- print("Skipping empty label.")
375
- continue
376
- if label not in prompts:
377
- print(f"Label '{label}' not found in prompts. Skipping.")
 
 
 
 
 
378
  continue
379
- input_label_idx = prompts.index(label)
380
- target_rgb = targets_config[input_label_idx][1:]
381
- target_hsv = cv2.cvtColor(np.uint8([[target_rgb]]), cv2.COLOR_RGB2HSV)[0][0]
382
-
383
- mask = prompt_segment_results["masks"][idx][0].astype(bool)
384
- h, s, v = cv2.split(img_hsv)
385
- # Convert all channels to float32 for consistent processing
386
- h = h.astype(np.float32)
387
- s = s.astype(np.float32)
388
- v = v.astype(np.float32)
389
-
390
- # Compute original S and V means inside the mask
391
- mean_s = np.mean(s[mask])
392
- mean_v = np.mean(v[mask])
393
-
394
- # Target S and V
395
- target_hue, target_s, target_v = target_hsv
396
-
397
- # Compute scaling factors (avoid div by zero)
398
- scale_s = target_s / mean_s if mean_s > 0 else 1.0
399
- scale_v = target_v / mean_v if mean_v > 0 else 1.0
400
-
401
- scale_s = np.clip(scale_s, 0.8, 1.2)
402
- scale_v = np.clip(scale_v, 0.8, 1.2)
403
-
404
- # Apply changes only in mask
405
- h[mask] = target_hue
406
- s = s.astype(np.float32)
407
- v = v.astype(np.float32)
408
- s[mask] = np.clip(s[mask] * scale_s, 0, 255)
409
- v[mask] = np.clip(v[mask] * scale_v, 0, 255)
410
-
411
- # Merge and convert back
412
- img_hsv = cv2.merge(
413
- [
414
- h.astype(np.uint8),
415
- s.astype(np.uint8),
416
- v.astype(np.uint8),
417
- ]
418
  )
419
 
420
  output_img = cv2.cvtColor(img_hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
@@ -423,7 +158,7 @@ def change_image_objects_hsv(
423
 
424
 
425
  @app.function(
426
- gpu="A10G",
427
  image=image,
428
  volumes={volume_path: volume},
429
  timeout=60 * 3,
@@ -454,35 +189,33 @@ def change_image_objects_lab(
454
 
455
  print("change image objects lab targets config:", targets_config)
456
 
457
- prompts = [target[0].strip() for target in targets_config]
458
 
459
- prompt_segment_results = prompt_segment.remote(
460
  image_pil=image_pil,
461
- prompts=prompts,
462
  )
463
- if not prompt_segment_results:
464
  return image_pil
465
 
466
- output_labels = prompt_segment_results["labels"]
467
-
468
  img_array = np.array(image_pil)
469
  img_lab = cv2.cvtColor(img_array, cv2.COLOR_RGB2Lab).astype(np.float32)
470
-
471
- for idx, label in enumerate(output_labels):
472
- if not label or label == "":
473
- print("Skipping empty label.")
474
- continue
475
-
476
- if label not in prompts:
477
- print(f"Label '{label}' not found in prompts. Skipping.")
 
 
 
478
  continue
479
 
480
- input_label_idx = prompts.index(label)
481
-
482
- new_a = targets_config[input_label_idx][1]
483
- new_b = targets_config[input_label_idx][2]
484
-
485
- mask = prompt_segment_results["masks"][idx][0]
486
  mask_bool = mask.astype(bool)
487
 
488
  img_lab[mask_bool, 1] = new_a
@@ -495,7 +228,7 @@ def change_image_objects_lab(
495
 
496
 
497
  @app.function(
498
- gpu="A10G",
499
  image=image,
500
  volumes={volume_path: volume},
501
  timeout=60 * 3,
@@ -523,80 +256,57 @@ def apply_mosaic_with_bool_mask(
523
 
524
 
525
  @app.function(
526
- gpu="A10G",
527
  image=image,
528
  volumes={volume_path: volume},
529
  timeout=60 * 3,
530
  )
531
  def preserve_privacy(
532
  image_pil: Image.Image,
533
- prompts: list[str],
534
  privacy_strength: int = 15,
535
- threshold: float = 0.2,
536
  ) -> Image.Image:
537
  """
538
  Preserves privacy in an image by applying a mosaic effect to specified objects.
539
  """
540
- print(f"Preserving privacy for prompt: {prompts} with strength {privacy_strength}")
541
- if isinstance(prompts, str):
542
- prompts = [prompt.strip() for prompt in prompts.split(".")]
543
- print(f"Parsed prompts: {prompts}")
544
- prompt_segment_results = privacy_prompt_segment.remote(
545
  image_pil=image_pil,
546
- prompts=prompts,
547
- threshold=threshold,
 
548
  )
549
- if not prompt_segment_results:
550
  return image_pil
551
 
552
  img_array = np.array(image_pil)
553
 
554
- for i, mask in enumerate(prompt_segment_results["masks"]):
555
- mask_bool = mask[0].astype(bool)
556
-
557
- # Create kernel for morphological operations
558
- kernel_size = 100
559
- kernel = np.ones((kernel_size, kernel_size), np.uint8)
560
-
561
- # Convert bool mask to uint8 for OpenCV operations
562
- mask_uint8 = mask_bool.astype(np.uint8) * 255
563
-
564
- # Apply dilation to slightly expand the mask area
565
- mask_uint8 = cv2.dilate(mask_uint8, kernel, iterations=2)
566
- # Optional: Apply erosion again to refine the mask
567
- mask_uint8 = cv2.erode(mask_uint8, kernel, iterations=2)
 
 
 
 
568
 
569
- # Convert back to boolean mask
570
- mask_bool = mask_uint8 > 127
571
 
572
- img_array = apply_mosaic_with_bool_mask.remote(
573
- img_array, mask_bool, privacy_strength
574
- )
575
 
576
  output_image_pil = Image.fromarray(img_array)
577
 
578
  return output_image_pil
579
-
580
-
581
- @app.function(
582
- gpu="A10G",
583
- image=image,
584
- volumes={volume_path: volume},
585
- timeout=60 * 2,
586
- )
587
- def remove_background(image_pil: Image.Image) -> Image.Image:
588
- import torch # type: ignore
589
- from ben2 import BEN_Base # type: ignore
590
-
591
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
592
- print(f"Using device: {device}")
593
- print("type of image_pil:", type(image_pil))
594
- model = BEN_Base.from_pretrained("PramaLLC/BEN2")
595
- model.to(device).eval() # todo check if this should be outside the function
596
-
597
- output_image = model.inference(
598
- image_pil,
599
- refine_foreground=True,
600
- )
601
- print(f"output type: {type(output_image)}")
602
- return output_image
 
29
  "TORCH_HOME": TORCH_HOME,
30
  }
31
  )
32
+ .apt_install("git")
 
 
33
  .pip_install(
34
  "huggingface-hub",
35
  "hf_transfer",
36
  "Pillow",
37
  "numpy",
 
38
  "opencv-contrib-python-headless",
 
39
  gpu="A10G",
40
  )
41
  .pip_install(
 
44
  index_url="https://download.pytorch.org/whl/cu124",
45
  gpu="A10G",
46
  )
 
 
47
  .pip_install(
48
+ "git+https://github.com/luca-medeiros/lang-segment-anything.git",
49
  gpu="A10G",
50
  )
51
  )
52
 
53
 
54
  @app.function(
 
 
 
55
  gpu="A10G",
 
 
56
  image=image,
 
57
  volumes={volume_path: volume},
58
+ # min_containers=1,
59
  timeout=60 * 3,
60
  )
61
+ def lang_sam_segment(
62
  image_pil: Image.Image,
63
+ prompt: str,
64
+ box_threshold=0.3,
65
+ text_threshold=0.25,
66
+ ) -> list:
67
+ """Segments an image using LangSAM based on a text prompt.
68
+ This function uses LangSAM to segment objects in the image based on the provided prompt.
69
+ """ # noqa: E501
70
+ from lang_sam import LangSAM # type: ignore
71
+
72
+ model = LangSAM(sam_type="sam2.1_hiera_large")
73
+ langsam_results = model.predict(
74
+ images_pil=[image_pil],
75
+ texts_prompt=[prompt],
76
+ box_threshold=box_threshold,
77
+ text_threshold=text_threshold,
 
 
78
  )
79
+ if len(langsam_results[0]["labels"]) == 0:
80
+ print("No masks found for the given prompt.")
 
 
81
  return None
82
+
83
+ print(f"found {len(langsam_results[0]['labels'])} masks for prompt: {prompt}")
84
+ print("labels:", langsam_results[0]["labels"])
85
+ print("scores:", langsam_results[0]["scores"])
86
+ print("masks scores:", langsam_results[0].get("mask_scores", "No mask scores available")) # noqa: E501
87
 
88
+ return langsam_results
 
 
89
 
90
 
91
  @app.function(
92
+ gpu="T4",
93
  image=image,
94
  volumes={volume_path: volume},
95
  timeout=60 * 3,
 
98
  image_pil: Image.Image,
99
  targets_config: list[list[str | int | float]],
100
  ) -> Image.Image:
101
+ """Changes the hue and saturation of specified objects in an image.
102
+ This function uses LangSAM to segment objects in the image based on provided prompts,
103
+ and then modifies the hue and saturation of those objects in the HSV color space.
104
+ """ # noqa: E501
105
  if not isinstance(targets_config, list) or not all(
106
  (
107
  isinstance(target, list)
108
+ and len(target) == 3
109
  and isinstance(target[0], str)
110
+ and isinstance(target[1], (int, float))
111
+ and isinstance(target[2], (int, float))
112
+ and 0 <= target[1] <= 179
 
 
113
  and target[2] >= 0
 
 
 
114
  )
115
  for target in targets_config
116
  ):
 
118
  "targets_config must be a list of lists, each containing [target_name, hue, saturation_scale]." # noqa: E501
119
  )
120
  print("Change image objects hsv targets config:", targets_config)
121
+ prompts = ". ".join(target[0] for target in targets_config)
122
 
123
+ langsam_results = lang_sam_segment.remote(image_pil=image_pil, prompt=prompts)
124
+ if not langsam_results:
 
 
 
125
  return image_pil
126
 
127
+ labels = langsam_results[0]["labels"]
128
+ scores = langsam_results[0]["scores"]
129
 
130
  img_array = np.array(image_pil)
131
  img_hsv = cv2.cvtColor(img_array, cv2.COLOR_RGB2HSV).astype(np.float32)
132
 
133
+ for target_spec in targets_config:
134
+ target_obj = target_spec[0]
135
+ hue = target_spec[1]
136
+ saturation_scale = target_spec[2]
137
+
138
+ try:
139
+ mask_idx = labels.index(target_obj)
140
+ except ValueError:
141
+ print(
142
+ f"Warning: Label '{target_obj}' not found in the image. Skipping this target." # noqa: E501
143
+ )
144
  continue
145
+
146
+ mask = langsam_results[0]["masks"][mask_idx]
147
+ mask_bool = mask.astype(bool)
148
+
149
+ img_hsv[mask_bool, 0] = float(hue)
150
+ img_hsv[mask_bool, 1] = np.minimum(
151
+ img_hsv[mask_bool, 1] * saturation_scale,
152
+ 255.0,
 
 
 
153
  )
154
 
155
  output_img = cv2.cvtColor(img_hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
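Once LangSAM returns a boolean mask, the HSV edit above reduces to setting the hue and scaling the saturation inside that mask. A standalone sketch of just that step (no Modal, no LangSAM; the rectangular mask stands in for a real segmentation mask):

```python
# Minimal local sketch of the masked hue/saturation edit performed above.
import cv2
import numpy as np
from PIL import Image

def recolor_masked(image_pil, mask_bool, hue, saturation_scale):
    img_hsv = cv2.cvtColor(np.array(image_pil), cv2.COLOR_RGB2HSV).astype(np.float32)
    img_hsv[mask_bool, 0] = float(hue)  # OpenCV hue range is 0-179
    img_hsv[mask_bool, 1] = np.minimum(img_hsv[mask_bool, 1] * saturation_scale, 255.0)
    return Image.fromarray(cv2.cvtColor(img_hsv.astype(np.uint8), cv2.COLOR_HSV2RGB))

img = Image.new("RGB", (64, 64), (200, 40, 40))      # dummy reddish image
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                            # stand-in for a LangSAM mask
out = recolor_masked(img, mask, hue=120, saturation_scale=1.0)  # shift the patch toward blue
```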
 
158
 
159
 
160
  @app.function(
161
+ gpu="T4",
162
  image=image,
163
  volumes={volume_path: volume},
164
  timeout=60 * 3,
 
189
 
190
  print("change image objects lab targets config:", targets_config)
191
 
192
+ prompts = ". ".join(target[0] for target in targets_config)
193
 
194
+ langsam_results = lang_sam_segment.remote(
195
  image_pil=image_pil,
196
+ prompt=prompts,
197
  )
198
+ if not langsam_results:
199
  return image_pil
200
 
201
+ labels = langsam_results[0]["labels"]
202
+ scores = langsam_results[0]["scores"]
203
  img_array = np.array(image_pil)
204
  img_lab = cv2.cvtColor(img_array, cv2.COLOR_RGB2Lab).astype(np.float32)
205
+ for target_spec in targets_config:
206
+ target_obj = target_spec[0]
207
+ new_a = target_spec[1]
208
+ new_b = target_spec[2]
209
+
210
+ try:
211
+ mask_idx = labels.index(target_obj)
212
+ except ValueError:
213
+ print(
214
+ f"Warning: Label '{target_obj}' not found in the image. Skipping this target." # noqa: E501
215
+ )
216
  continue
217
 
218
+ mask = langsam_results[0]["masks"][mask_idx]
 
 
 
 
 
219
  mask_bool = mask.astype(bool)
220
 
221
  img_lab[mask_bool, 1] = new_a
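The LAB variant writes `new_a` / `new_b` directly into the masked pixels on OpenCV's 0-255 scale (128 = neutral). A small helper, offered as an illustration rather than repo code, for deriving those values and the HSV hue from a familiar RGB colour:

```python
# Illustration: convert an RGB colour to the value scales these tools expect
# (OpenCV hue 0-179 for the HSV tool, a/b 0-255 with 128 neutral for the LAB tool).
import cv2
import numpy as np

def rgb_to_tool_values(r, g, b):
    pixel = np.uint8([[[r, g, b]]])  # 1x1 RGB image
    hue = cv2.cvtColor(pixel, cv2.COLOR_RGB2HSV)[0][0][0]
    _, a, b_chan = cv2.cvtColor(pixel, cv2.COLOR_RGB2Lab)[0][0]
    return {"hue": int(hue), "new_a": int(a), "new_b": int(b_chan)}

print(rgb_to_tool_values(128, 0, 128))  # purple: hue ~150, a > 128, b < 128
```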
 
228
 
229
 
230
  @app.function(
231
+ gpu="T4",
232
  image=image,
233
  volumes={volume_path: volume},
234
  timeout=60 * 3,
 
256
 
257
 
258
  @app.function(
259
+ gpu="T4",
260
  image=image,
261
  volumes={volume_path: volume},
262
  timeout=60 * 3,
263
  )
264
  def preserve_privacy(
265
  image_pil: Image.Image,
266
+ prompt: str,
267
  privacy_strength: int = 15,
 
268
  ) -> Image.Image:
269
  """
270
  Preserves privacy in an image by applying a mosaic effect to specified objects.
271
  """
272
+ print(f"Preserving privacy for prompt: {prompt} with strength {privacy_strength}")
273
+
274
+ langsam_results = lang_sam_segment.remote(
 
 
275
  image_pil=image_pil,
276
+ prompt=prompt,
277
+ box_threshold=0.35,
278
+ text_threshold=0.40,
279
  )
280
+ if not langsam_results:
281
  return image_pil
282
 
283
  img_array = np.array(image_pil)
284
 
285
+ for result in langsam_results:
286
+ print(f"result: {result}")
287
+
288
+ for i, mask in enumerate(result["masks"]):
289
+ if "mask_scores" in result:
290
+ if (
291
+ hasattr(result["mask_scores"], "shape")
292
+ and result["mask_scores"].ndim > 0
293
+ ):
294
+ mask_score = result["mask_scores"][i]
295
+ else:
296
+ mask_score = result["mask_scores"]
297
+ if mask_score < 0.6:
298
+ print(f"Skipping mask {i + 1}/{len(result['masks'])} -> low score.")
299
+ continue
300
+ print(
301
+ f"Processing mask {i + 1}/{len(result['masks'])} Mask score: {mask_score}" # noqa: E501
302
+ )
303
 
304
+ mask_bool = mask.astype(bool)
 
305
 
306
+ img_array = apply_mosaic_with_bool_mask.remote(
307
+ img_array, mask_bool, privacy_strength
308
+ )
309
 
310
  output_image_pil = Image.fromarray(img_array)
311
 
312
  return output_image_pil
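`apply_mosaic_with_bool_mask` is invoked here but its body falls outside this diff. A plausible minimal sketch, assuming a simple pixelation approach (downscale by the strength factor, upscale with nearest-neighbour, and copy only where the mask is True):

```python
# Hedged sketch of what apply_mosaic_with_bool_mask might do; the actual
# implementation is not shown in this diff.
import cv2
import numpy as np

def apply_mosaic_with_bool_mask(img, mask_bool, strength=15):
    h, w = img.shape[:2]
    small = cv2.resize(img, (max(1, w // strength), max(1, h // strength)),
                       interpolation=cv2.INTER_LINEAR)
    mosaic = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    out = img.copy()
    out[mask_bool] = mosaic[mask_bool]  # pixelate only the masked region
    return out
```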
 
 
 
src/tools.py CHANGED
@@ -9,40 +9,10 @@ from PIL import Image
9
  modal_app_name = "ImageAlfred"
10
 
11
 
12
- def remove_background(
13
- input_img,
14
- ) -> np.ndarray | Image.Image | str | Path | None:
15
- """
16
- Remove the background of the image.
17
-
18
- Args:
19
- input_img: Input image or can be URL string of the image or base64 string. Cannot be None.
20
- Returns:
21
- bytes: Binary image data of the modified image.
22
- """ # noqa: E501
23
- if not input_img:
24
- raise gr.Error("Input image cannot be None or empty.")
25
-
26
- func = modal.Function.from_name(modal_app_name, "remove_background")
27
- output_pil = func.remote(
28
- image_pil=input_img,
29
- )
30
-
31
- if output_pil is None:
32
- raise gr.Error("Received None from server.")
33
- if not isinstance(output_pil, Image.Image):
34
- raise gr.Error(
35
- f"Expected Image.Image from server function, got {type(output_pil)}"
36
- )
37
-
38
- return output_pil
39
-
40
-
41
  def privacy_preserve_image(
42
  input_img,
43
  input_prompt,
44
  privacy_strength: int = 15,
45
- threshold: float = 0.2,
46
  ) -> np.ndarray | Image.Image | str | Path | None:
47
  """
48
  Obscures specified objects in the input image based on a natural language prompt, using a privacy-preserving blur or distortion effect.
@@ -52,30 +22,27 @@ def privacy_preserve_image(
52
 
53
  Args:
54
  input_img: Input image or can be URL string of the image or base64 string. Cannot be None.
55
- input_prompt (str): Object to obscure in the image has to be a dot-separated string. It can be a single word or multiple words, e.g., "left person face", "license plate" but it must be as short as possible and avoid using symbols or punctuation. e.g. input_prompt = "face. right car. blue shirt."
56
  privacy_strength (int): Strength of the privacy preservation effect. Higher values result in stronger blurring. Default is 15.
57
- threshold (float): Model threshold for detecting objects. It should be between 0.01 and 0.99. Default is 0.2. for detecting smaller objects, small regions or faces a lower threshold is recommended.
58
  Returns:
59
  bytes: Binary image data of the modified image.
60
 
61
  example:
62
- input_prompt = "faces, license plates, logos"
63
  """ # noqa: E501
64
  if not input_img:
65
  raise gr.Error("Input image cannot be None or empty.")
 
66
  if not input_prompt or input_prompt.strip() == "":
67
  raise gr.Error("Input prompt cannot be None or empty.")
68
- if threshold < 0.01 or threshold > 0.99:
69
- raise gr.Error("Threshold must be between 0.01 and 0.99.")
70
- if isinstance(input_prompt, str):
71
- prompts = [prompt.strip() for prompt in input_prompt.split(".")]
72
 
73
- func = modal.Function.from_name(modal_app_name, "preserve_privacy")
74
  output_pil = func.remote(
75
  image_pil=input_img,
76
- prompts=prompts,
77
  privacy_strength=privacy_strength,
78
- threshold=threshold,
79
  )
80
 
81
  if output_pil is None:
@@ -94,22 +61,36 @@ def change_color_objects_hsv(
94
  ) -> np.ndarray | Image.Image | str | Path | None:
95
  """
96
  Changes the hue and saturation of specified objects in an image using the HSV color space.
97
- This function segments image regions based on a user-provided text prompt and applies
98
- color transformations in the HSV color space. HSV separates chromatic content (hue) from
99
- intensity (value), making it more intuitive for color manipulation tasks.
 
 
 
100
  Use this method when:
101
- - You want to change the color of objects based on their hue and saturation.
102
- - You want to apply color transformations that are less influenced by lighting conditions or brightness variations.
 
 
 
 
 
 
 
 
 
 
 
103
 
104
  Args:
105
  input_img: Input image or can be URL string of the image or base64 string. Cannot be None.
106
- user_input : A list of target specifications for color transformation. Each inner list must contain exactly four elements in the following order: 1. target_object (str) - A short, human-readable description of the object to be modified. Multi-word, descriptions are allowed for disambiguation (e.g., "right person shirt"), but they must be concise and free of punctuation, symbols, or special characters.2. Red (int) - Desired red value in RGB color space from 0 to 255. 3. Green (int) - Desired green value in RGB color space from 0 to 255. 4. Blue (int) - Desired blue value in RGB color space from 0 to 255. Example: user_input = [["hair", 30, 55, 255], ["shirt", 70, 0 , 157]].
107
 
108
  Returns:
109
  Base64-encoded string.
110
 
111
  Raises:
112
- ValueError: If user_input format is invalid, or image format is invalid or corrupted.
113
  TypeError: If input_img is not a supported type or modal function returns unexpected type.
114
  """ # noqa: E501
115
  if len(user_input) == 0 or not isinstance(user_input, list):
@@ -118,13 +99,13 @@ def change_color_objects_hsv(
118
  )
119
  if not input_img:
120
  raise gr.Error("input img cannot be None or empty.")
121
-
122
  print("before processing input:", user_input)
123
  valid_pattern = re.compile(r"^[a-zA-Z\s]+$")
124
  for item in user_input:
125
- if len(item) != 4:
126
  raise gr.Error(
127
- "Each item in user_input must be a list of [object, red, green, blue]" # noqa: E501
128
  )
129
  if not item[0] or not valid_pattern.match(item[0]):
130
  raise gr.Error(
@@ -133,31 +114,28 @@ def change_color_objects_hsv(
133
 
134
  if not isinstance(item[0], str):
135
  item[0] = str(item[0])
136
-
137
- try:
138
- item[1] = int(item[1])
139
- except ValueError:
140
- raise gr.Error("Red must be an integer.")
141
- if item[1] < 0 or item[1] > 255:
142
- raise gr.Error("Red must be in the range [0, 255]")
143
-
144
- try:
145
- item[2] = int(item[2])
146
- except ValueError:
147
- raise gr.Error("Green must be an integer.")
148
- if item[2] < 0 or item[2] > 255:
149
- raise gr.Error("Green must be in the range [0, 255]")
150
-
151
- try:
152
- item[3] = int(item[3])
153
- except ValueError:
154
- raise gr.Error("Blue must be an integer.")
155
- if item[3] < 0 or item[3] > 255:
156
- raise gr.Error("Blue must be in the range [0, 255]")
157
 
158
  print("after processing input:", user_input)
159
 
160
- func = modal.Function.from_name(modal_app_name, "change_image_objects_hsv")
161
  output_pil = func.remote(image_pil=input_img, targets_config=user_input)
162
 
163
  if output_pil is None:
@@ -202,7 +180,7 @@ def change_color_objects_lab(
202
  - Purple: (L=?, A≈180, B≈100)
203
 
204
  Args:
205
- user_input: A list of color transformation instructions, each as a three-element list:[object_name (str), new_a (int, 0-255), new_b (int, 0-255)].- object_name: A short, unique identifier for the object to be recolored. Multi-word names are allowed for specificity (e.g., "right person shirt") but must be free of punctuation or special symbols.- new_a: The desired 'a' channel value in LAB space (green-red axis, 0-255, with 128 as neutral).- new_b: The desired 'b' channel value in LAB space (blue-yellow axis, 0-255, with 128 as neutral).Each object must appear only once in the list. Example:[["hair", 80, 128], ["right person shirt", 180, 160]]
206
  input_img : Input image can be URL string of the image. Cannot be None.
207
 
208
  Returns:
@@ -220,7 +198,7 @@ def change_color_objects_lab(
220
  raise gr.Error("input img cannot be None or empty.")
221
  valid_pattern = re.compile(r"^[a-zA-Z\s]+$")
222
  print("before processing input:", user_input)
223
-
224
  for item in user_input:
225
  if len(item) != 3:
226
  raise gr.Error(
@@ -252,7 +230,7 @@ def change_color_objects_lab(
252
  raise gr.Error("new B must be in the range [0, 255]")
253
 
254
  print("after processing input:", user_input)
255
- func = modal.Function.from_name(modal_app_name, "change_image_objects_lab")
256
  output_pil = func.remote(image_pil=input_img, targets_config=user_input)
257
  if output_pil is None:
258
  raise ValueError("Received None from modal remote function.")
@@ -260,5 +238,13 @@ def change_color_objects_lab(
260
  raise TypeError(
261
  f"Expected Image.Image from modal remote function, got {type(output_pil)}"
262
  )
 
263
 
264
  return output_pil
 
 
9
  modal_app_name = "ImageAlfred"
10
 
11
 
 
 
 
 
12
  def privacy_preserve_image(
13
  input_img,
14
  input_prompt,
15
  privacy_strength: int = 15,
 
16
  ) -> np.ndarray | Image.Image | str | Path | None:
17
  """
18
  Obscures specified objects in the input image based on a natural language prompt, using a privacy-preserving blur or distortion effect.
 
22
 
23
  Args:
24
  input_img: Input image or can be URL string of the image or base64 string. Cannot be None.
25
+ input_prompt (str): Objects to obscure in the image, given as a dot-separated string. Each entry can be a single word or a short phrase, e.g., "left person face", "license plate", but it must be as short as possible and avoid symbols or punctuation. Use the singular form of each word, e.g., "person" instead of "people", "face" instead of "faces". Example: input_prompt = "face. right car. blue shirt."
26
  privacy_strength (int): Strength of the privacy preservation effect. Higher values result in stronger blurring. Default is 15.
 
27
  Returns:
28
  bytes: Binary image data of the modified image.
29
 
30
  example:
31
+ input_prompt = ["face", "license plate"]
32
  """ # noqa: E501
33
  if not input_img:
34
  raise gr.Error("Input image cannot be None or empty.")
35
+ valid_pattern = re.compile(r"^[a-zA-Z\s.]+$")
36
  if not input_prompt or input_prompt.strip() == "":
37
  raise gr.Error("Input prompt cannot be None or empty.")
38
+ if not valid_pattern.match(input_prompt):
39
+ raise gr.Error("Input prompt must contain only letters, spaces, and dots.")
 
 
40
 
41
+ func = modal.Function.from_name("ImageAlfred", "preserve_privacy")
42
  output_pil = func.remote(
43
  image_pil=input_img,
44
+ prompt=input_prompt,
45
  privacy_strength=privacy_strength,
 
46
  )
47
 
48
  if output_pil is None:
 
61
  ) -> np.ndarray | Image.Image | str | Path | None:
62
  """
63
  Changes the hue and saturation of specified objects in an image using the HSV color space.
64
+
65
+ This function segments objects in the image based on a user-provided text prompt, then
66
+ modifies their hue and saturation in the HSV (Hue, Saturation, Value) space. HSV is intuitive
67
+ for color manipulation where users think in terms of basic color categories and intensity,
68
+ making it useful for broad, vivid color shifts.
69
+
70
  Use this method when:
71
+ - Performing broad color changes or visual effects (e.g., turning a shirt from red to blue).
72
+ - Needing intuitive control over color categories (e.g., shifting everything that's red to purple).
73
+ - Saturation and vibrancy manipulation are more important than accurate perceptual matching.
74
+
75
+ OpenCV HSV Ranges:
76
+ - H: 0-179 (Hue angle on color wheel, where 0 = red, 60 = green, 120 = blue, etc.)
77
+ - S: 0-255 (Saturation)
78
+ - V: 0-255 (Brightness)
79
+
80
+ Common HSV color references:
81
+ - Red: (Hue≈0), Green: (Hue≈60), Blue: (Hue≈120), Yellow: (Hue≈30), Purple: (Hue≈150)
82
+ - Typically used with Saturation=255 for vivid colors.
83
+
84
 
85
  Args:
86
  input_img: Input image or can be URL string of the image or base64 string. Cannot be None.
87
+ user_input : A list of target specifications for color transformation. Each inner list must contain exactly three elements in the following order: 1. target_object (str) - A short, human-readable description of the object to be modified. Multi-word descriptions are allowed for disambiguation (e.g., "right person shirt"), but they must be at most three words, concise, and free of punctuation, symbols, or special characters. 2. hue (int) - Desired hue value in the HSV color space, ranging from 0 to 179; represents the color angle on the HSV color wheel (e.g., 0 = red, 60 = green, 120 = blue). 3. saturation_scale (float) - A multiplicative scale factor applied to the current saturation of the object (must be > 0). For example, 1.0 preserves current saturation, 1.2 increases vibrancy, and 0.8 slightly desaturates. Each target object must be uniquely defined in the list to avoid conflicting transformations. Example: [["hair", 30, 1.2], ["right person shirt", 60, 1.0]]
88
 
89
  Returns:
90
  Base64-encoded string.
91
 
92
  Raises:
93
+ ValueError: If user_input format is invalid, hue values are outside [0, 179] range, saturation_scale is not positive, or image format is invalid or corrupted.
94
  TypeError: If input_img is not a supported type or modal function returns unexpected type.
95
  """ # noqa: E501
96
  if len(user_input) == 0 or not isinstance(user_input, list):
 
99
  )
100
  if not input_img:
101
  raise gr.Error("input img cannot be None or empty.")
102
+
103
  print("before processing input:", user_input)
104
  valid_pattern = re.compile(r"^[a-zA-Z\s]+$")
105
  for item in user_input:
106
+ if len(item) != 3:
107
  raise gr.Error(
108
+ "Each item in user_input must be a list of [object, hue, saturation_scale]" # noqa: E501
109
  )
110
  if not item[0] or not valid_pattern.match(item[0]):
111
  raise gr.Error(
 
114
 
115
  if not isinstance(item[0], str):
116
  item[0] = str(item[0])
117
+ if not item[1]:
118
+ raise gr.Error("Hue must be set and cannot be empty.")
119
+ if not isinstance(item[1], (int, float)):
120
+ try:
121
+ item[1] = int(item[1])
122
+ except ValueError:
123
+ raise gr.Error("Hue must be an integer.")
124
+ if item[1] < 0 or item[1] > 179:
125
+ raise gr.Error("Hue must be in the range [0, 179]")
126
+ if not item[2]:
127
+ raise gr.Error("Saturation scale must be set and cannot be empty.")
128
+ if not isinstance(item[2], (int, float)):
129
+ try:
130
+ item[2] = float(item[2])
131
+ except ValueError:
132
+ raise gr.Error("Saturation scale must be a float number.")
133
+ if item[2] <= 0:
134
+ raise gr.Error("Saturation scale must be greater than 0")
 
 
 
135
 
136
  print("after processing input:", user_input)
137
 
138
+ func = modal.Function.from_name("ImageAlfred", "change_image_objects_hsv")
139
  output_pil = func.remote(image_pil=input_img, targets_config=user_input)
140
 
141
  if output_pil is None:
 
180
  - Purple: (L=?, A≈180, B≈100)
181
 
182
  Args:
183
+ user_input: A list of color transformation instructions, each as a three-element list: [object_name (str), new_a (int, 0-255), new_b (int, 0-255)]. - object_name: A short, unique identifier for the object to be recolored; multi-word names are allowed for specificity (e.g., "right person shirt") but must be 3 words or fewer and free of punctuation or special symbols. - new_a: The desired 'a' channel value in LAB space (green-red axis, 0-255, with 128 as neutral). - new_b: The desired 'b' channel value in LAB space (blue-yellow axis, 0-255, with 128 as neutral). Each object must appear only once in the list. Example: [["hair", 80, 128], ["right person shirt", 180, 160]]
184
  input_img : Input image can be URL string of the image. Cannot be None.
185
 
186
  Returns:
 
198
  raise gr.Error("input img cannot be None or empty.")
199
  valid_pattern = re.compile(r"^[a-zA-Z\s]+$")
200
  print("before processing input:", user_input)
201
+
202
  for item in user_input:
203
  if len(item) != 3:
204
  raise gr.Error(
 
230
  raise gr.Error("new B must be in the range [0, 255]")
231
 
232
  print("after processing input:", user_input)
233
+ func = modal.Function.from_name("ImageAlfred", "change_image_objects_lab")
234
  output_pil = func.remote(image_pil=input_img, targets_config=user_input)
235
  if output_pil is None:
236
  raise ValueError("Received None from modal remote function.")
 
238
  raise TypeError(
239
  f"Expected Image.Image from modal remote function, got {type(output_pil)}"
240
  )
241
+ # img_link = upload_image_to_tmpfiles(output_pil)
242
 
243
  return output_pil
244
+
245
+
246
+ if __name__ == "__main__":
247
+ image_pil = Image.open("./src/assets/test_image.jpg")
248
+ change_color_objects_hsv(
249
+ user_input=[["hair", 30, 1.2], ["shirt", 60, 1.0]], input_img=image_pil
250
+ )
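For symmetry with the `__main__` example above, a matching local sketch for the LAB tool using its docstring's example values (this likewise assumes the "ImageAlfred" Modal app is already deployed):

```python
# Hedged companion to the __main__ example above: exercise the LAB tool locally.
from PIL import Image
from tools import change_color_objects_lab

image_pil = Image.open("./src/assets/test_image.jpg")
change_color_objects_lab(user_input=[["hair", 80, 128]], input_img=image_pil)
```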
uv.lock CHANGED
@@ -831,6 +831,7 @@ version = "0.1.0"
831
  source = { virtual = "." }
832
  dependencies = [
833
  { name = "gradio", extra = ["mcp"] },
 
834
  { name = "modal" },
835
  { name = "numpy" },
836
  { name = "pillow" },
@@ -839,16 +840,14 @@ dependencies = [
839
  [package.dev-dependencies]
840
  dev = [
841
  { name = "jupyterlab" },
842
- { name = "matplotlib" },
843
  { name = "opencv-contrib-python" },
844
- { name = "rapidfuzz" },
845
  { name = "ruff" },
846
- { name = "supervision" },
847
  ]
848
 
849
  [package.metadata]
850
  requires-dist = [
851
  { name = "gradio", extras = ["mcp"], specifier = ">=5.32.1" },
 
852
  { name = "modal", specifier = ">=1.0.2" },
853
  { name = "numpy", specifier = ">=2.2.6" },
854
  { name = "pillow", specifier = ">=11.2.1" },
@@ -857,11 +856,8 @@ requires-dist = [
857
  [package.metadata.requires-dev]
858
  dev = [
859
  { name = "jupyterlab", specifier = ">=4.4.3" },
860
- { name = "matplotlib", specifier = ">=3.10.3" },
861
  { name = "opencv-contrib-python", specifier = ">=4.11.0.86" },
862
- { name = "rapidfuzz", specifier = ">=3.13.0" },
863
  { name = "ruff", specifier = ">=0.11.12" },
864
- { name = "supervision", specifier = ">=0.25.1" },
865
  ]
866
 
867
  [[package]]
@@ -1572,23 +1568,6 @@ wheels = [
1572
  { url = "https://files.pythonhosted.org/packages/0d/c6/146487546adc4726f0be591a65b466973feaa58cc3db711087e802e940fb/opencv_contrib_python-4.11.0.86-cp37-abi3-win_amd64.whl", hash = "sha256:654758a9ae8ca9a75fca7b64b19163636534f0eedffe1e14c3d7218988625c8d", size = 46185163, upload-time = "2025-01-16T13:52:39.745Z" },
1573
  ]
1574
 
1575
- [[package]]
1576
- name = "opencv-python"
1577
- version = "4.11.0.86"
1578
- source = { registry = "https://pypi.org/simple" }
1579
- dependencies = [
1580
- { name = "numpy" },
1581
- ]
1582
- sdist = { url = "https://files.pythonhosted.org/packages/17/06/68c27a523103dad5837dc5b87e71285280c4f098c60e4fe8a8db6486ab09/opencv-python-4.11.0.86.tar.gz", hash = "sha256:03d60ccae62304860d232272e4a4fda93c39d595780cb40b161b310244b736a4", size = 95171956, upload-time = "2025-01-16T13:52:24.737Z" }
1583
- wheels = [
1584
- { url = "https://files.pythonhosted.org/packages/05/4d/53b30a2a3ac1f75f65a59eb29cf2ee7207ce64867db47036ad61743d5a23/opencv_python-4.11.0.86-cp37-abi3-macosx_13_0_arm64.whl", hash = "sha256:432f67c223f1dc2824f5e73cdfcd9db0efc8710647d4e813012195dc9122a52a", size = 37326322, upload-time = "2025-01-16T13:52:25.887Z" },
1585
- { url = "https://files.pythonhosted.org/packages/3b/84/0a67490741867eacdfa37bc18df96e08a9d579583b419010d7f3da8ff503/opencv_python-4.11.0.86-cp37-abi3-macosx_13_0_x86_64.whl", hash = "sha256:9d05ef13d23fe97f575153558653e2d6e87103995d54e6a35db3f282fe1f9c66", size = 56723197, upload-time = "2025-01-16T13:55:21.222Z" },
1586
- { url = "https://files.pythonhosted.org/packages/f3/bd/29c126788da65c1fb2b5fb621b7fed0ed5f9122aa22a0868c5e2c15c6d23/opencv_python-4.11.0.86-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1b92ae2c8852208817e6776ba1ea0d6b1e0a1b5431e971a2a0ddd2a8cc398202", size = 42230439, upload-time = "2025-01-16T13:51:35.822Z" },
1587
- { url = "https://files.pythonhosted.org/packages/2c/8b/90eb44a40476fa0e71e05a0283947cfd74a5d36121a11d926ad6f3193cc4/opencv_python-4.11.0.86-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b02611523803495003bd87362db3e1d2a0454a6a63025dc6658a9830570aa0d", size = 62986597, upload-time = "2025-01-16T13:52:08.836Z" },
1588
- { url = "https://files.pythonhosted.org/packages/fb/d7/1d5941a9dde095468b288d989ff6539dd69cd429dbf1b9e839013d21b6f0/opencv_python-4.11.0.86-cp37-abi3-win32.whl", hash = "sha256:810549cb2a4aedaa84ad9a1c92fbfdfc14090e2749cedf2c1589ad8359aa169b", size = 29384337, upload-time = "2025-01-16T13:52:13.549Z" },
1589
- { url = "https://files.pythonhosted.org/packages/a4/7d/f1c30a92854540bf789e9cd5dde7ef49bbe63f855b85a2e6b3db8135c591/opencv_python-4.11.0.86-cp37-abi3-win_amd64.whl", hash = "sha256:085ad9b77c18853ea66283e98affefe2de8cc4c1f43eda4c100cf9b2721142ec", size = 39488044, upload-time = "2025-01-16T13:52:21.928Z" },
1590
- ]
1591
-
1592
  [[package]]
1593
  name = "orjson"
1594
  version = "3.10.18"
@@ -2130,44 +2109,6 @@ wheels = [
2130
  { url = "https://files.pythonhosted.org/packages/05/4c/bf3cad0d64c3214ac881299c4562b815f05d503bccc513e3fd4fdc6f67e4/pyzmq-26.4.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:26a2a7451606b87f67cdeca2c2789d86f605da08b4bd616b1a9981605ca3a364", size = 1395540, upload-time = "2025-04-04T12:04:30.562Z" },
2131
  ]
2132
 
2133
- [[package]]
2134
- name = "rapidfuzz"
2135
- version = "3.13.0"
2136
- source = { registry = "https://pypi.org/simple" }
2137
- sdist = { url = "https://files.pythonhosted.org/packages/ed/f6/6895abc3a3d056b9698da3199b04c0e56226d530ae44a470edabf8b664f0/rapidfuzz-3.13.0.tar.gz", hash = "sha256:d2eaf3839e52cbcc0accbe9817a67b4b0fcf70aaeb229cfddc1c28061f9ce5d8", size = 57904226, upload-time = "2025-04-03T20:38:51.226Z" }
2138
- wheels = [
2139
- { url = "https://files.pythonhosted.org/packages/13/4b/a326f57a4efed8f5505b25102797a58e37ee11d94afd9d9422cb7c76117e/rapidfuzz-3.13.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4a1a6a906ba62f2556372282b1ef37b26bca67e3d2ea957277cfcefc6275cca7", size = 1989501, upload-time = "2025-04-03T20:36:13.43Z" },
2140
- { url = "https://files.pythonhosted.org/packages/b7/53/1f7eb7ee83a06c400089ec7cb841cbd581c2edd7a4b21eb2f31030b88daa/rapidfuzz-3.13.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2fd0975e015b05c79a97f38883a11236f5a24cca83aa992bd2558ceaa5652b26", size = 1445379, upload-time = "2025-04-03T20:36:16.439Z" },
2141
- { url = "https://files.pythonhosted.org/packages/07/09/de8069a4599cc8e6d194e5fa1782c561151dea7d5e2741767137e2a8c1f0/rapidfuzz-3.13.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d4e13593d298c50c4f94ce453f757b4b398af3fa0fd2fde693c3e51195b7f69", size = 1405986, upload-time = "2025-04-03T20:36:18.447Z" },
2142
- { url = "https://files.pythonhosted.org/packages/5d/77/d9a90b39c16eca20d70fec4ca377fbe9ea4c0d358c6e4736ab0e0e78aaf6/rapidfuzz-3.13.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed6f416bda1c9133000009d84d9409823eb2358df0950231cc936e4bf784eb97", size = 5310809, upload-time = "2025-04-03T20:36:20.324Z" },
2143
- { url = "https://files.pythonhosted.org/packages/1e/7d/14da291b0d0f22262d19522afaf63bccf39fc027c981233fb2137a57b71f/rapidfuzz-3.13.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1dc82b6ed01acb536b94a43996a94471a218f4d89f3fdd9185ab496de4b2a981", size = 1629394, upload-time = "2025-04-03T20:36:22.256Z" },
2144
- { url = "https://files.pythonhosted.org/packages/b7/e4/79ed7e4fa58f37c0f8b7c0a62361f7089b221fe85738ae2dbcfb815e985a/rapidfuzz-3.13.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e9d824de871daa6e443b39ff495a884931970d567eb0dfa213d234337343835f", size = 1600544, upload-time = "2025-04-03T20:36:24.207Z" },
2145
- { url = "https://files.pythonhosted.org/packages/4e/20/e62b4d13ba851b0f36370060025de50a264d625f6b4c32899085ed51f980/rapidfuzz-3.13.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2d18228a2390375cf45726ce1af9d36ff3dc1f11dce9775eae1f1b13ac6ec50f", size = 3052796, upload-time = "2025-04-03T20:36:26.279Z" },
2146
- { url = "https://files.pythonhosted.org/packages/cd/8d/55fdf4387dec10aa177fe3df8dbb0d5022224d95f48664a21d6b62a5299d/rapidfuzz-3.13.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:9f5fe634c9482ec5d4a6692afb8c45d370ae86755e5f57aa6c50bfe4ca2bdd87", size = 2464016, upload-time = "2025-04-03T20:36:28.525Z" },
2147
- { url = "https://files.pythonhosted.org/packages/9b/be/0872f6a56c0f473165d3b47d4170fa75263dc5f46985755aa9bf2bbcdea1/rapidfuzz-3.13.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:694eb531889f71022b2be86f625a4209c4049e74be9ca836919b9e395d5e33b3", size = 7556725, upload-time = "2025-04-03T20:36:30.629Z" },
2148
- { url = "https://files.pythonhosted.org/packages/5d/f3/6c0750e484d885a14840c7a150926f425d524982aca989cdda0bb3bdfa57/rapidfuzz-3.13.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:11b47b40650e06147dee5e51a9c9ad73bb7b86968b6f7d30e503b9f8dd1292db", size = 2859052, upload-time = "2025-04-03T20:36:32.836Z" },
2149
- { url = "https://files.pythonhosted.org/packages/6f/98/5a3a14701b5eb330f444f7883c9840b43fb29c575e292e09c90a270a6e07/rapidfuzz-3.13.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:98b8107ff14f5af0243f27d236bcc6e1ef8e7e3b3c25df114e91e3a99572da73", size = 3390219, upload-time = "2025-04-03T20:36:35.062Z" },
2150
- { url = "https://files.pythonhosted.org/packages/e9/7d/f4642eaaeb474b19974332f2a58471803448be843033e5740965775760a5/rapidfuzz-3.13.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b836f486dba0aceb2551e838ff3f514a38ee72b015364f739e526d720fdb823a", size = 4377924, upload-time = "2025-04-03T20:36:37.363Z" },
2151
- { url = "https://files.pythonhosted.org/packages/8e/83/fa33f61796731891c3e045d0cbca4436a5c436a170e7f04d42c2423652c3/rapidfuzz-3.13.0-cp312-cp312-win32.whl", hash = "sha256:4671ee300d1818d7bdfd8fa0608580d7778ba701817216f0c17fb29e6b972514", size = 1823915, upload-time = "2025-04-03T20:36:39.451Z" },
2152
- { url = "https://files.pythonhosted.org/packages/03/25/5ee7ab6841ca668567d0897905eebc79c76f6297b73bf05957be887e9c74/rapidfuzz-3.13.0-cp312-cp312-win_amd64.whl", hash = "sha256:6e2065f68fb1d0bf65adc289c1bdc45ba7e464e406b319d67bb54441a1b9da9e", size = 1616985, upload-time = "2025-04-03T20:36:41.631Z" },
2153
- { url = "https://files.pythonhosted.org/packages/76/5e/3f0fb88db396cb692aefd631e4805854e02120a2382723b90dcae720bcc6/rapidfuzz-3.13.0-cp312-cp312-win_arm64.whl", hash = "sha256:65cc97c2fc2c2fe23586599686f3b1ceeedeca8e598cfcc1b7e56dc8ca7e2aa7", size = 860116, upload-time = "2025-04-03T20:36:43.915Z" },
2154
- { url = "https://files.pythonhosted.org/packages/0a/76/606e71e4227790750f1646f3c5c873e18d6cfeb6f9a77b2b8c4dec8f0f66/rapidfuzz-3.13.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:09e908064d3684c541d312bd4c7b05acb99a2c764f6231bd507d4b4b65226c23", size = 1982282, upload-time = "2025-04-03T20:36:46.149Z" },
2155
- { url = "https://files.pythonhosted.org/packages/0a/f5/d0b48c6b902607a59fd5932a54e3518dae8223814db8349b0176e6e9444b/rapidfuzz-3.13.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:57c390336cb50d5d3bfb0cfe1467478a15733703af61f6dffb14b1cd312a6fae", size = 1439274, upload-time = "2025-04-03T20:36:48.323Z" },
2156
- { url = "https://files.pythonhosted.org/packages/59/cf/c3ac8c80d8ced6c1f99b5d9674d397ce5d0e9d0939d788d67c010e19c65f/rapidfuzz-3.13.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0da54aa8547b3c2c188db3d1c7eb4d1bb6dd80baa8cdaeaec3d1da3346ec9caa", size = 1399854, upload-time = "2025-04-03T20:36:50.294Z" },
2157
- { url = "https://files.pythonhosted.org/packages/09/5d/ca8698e452b349c8313faf07bfa84e7d1c2d2edf7ccc67bcfc49bee1259a/rapidfuzz-3.13.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:df8e8c21e67afb9d7fbe18f42c6111fe155e801ab103c81109a61312927cc611", size = 5308962, upload-time = "2025-04-03T20:36:52.421Z" },
2158
- { url = "https://files.pythonhosted.org/packages/66/0a/bebada332854e78e68f3d6c05226b23faca79d71362509dbcf7b002e33b7/rapidfuzz-3.13.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:461fd13250a2adf8e90ca9a0e1e166515cbcaa5e9c3b1f37545cbbeff9e77f6b", size = 1625016, upload-time = "2025-04-03T20:36:54.639Z" },
2159
- { url = "https://files.pythonhosted.org/packages/de/0c/9e58d4887b86d7121d1c519f7050d1be5eb189d8a8075f5417df6492b4f5/rapidfuzz-3.13.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c2b3dd5d206a12deca16870acc0d6e5036abeb70e3cad6549c294eff15591527", size = 1600414, upload-time = "2025-04-03T20:36:56.669Z" },
2160
- { url = "https://files.pythonhosted.org/packages/9b/df/6096bc669c1311568840bdcbb5a893edc972d1c8d2b4b4325c21d54da5b1/rapidfuzz-3.13.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1343d745fbf4688e412d8f398c6e6d6f269db99a54456873f232ba2e7aeb4939", size = 3053179, upload-time = "2025-04-03T20:36:59.366Z" },
2161
- { url = "https://files.pythonhosted.org/packages/f9/46/5179c583b75fce3e65a5cd79a3561bd19abd54518cb7c483a89b284bf2b9/rapidfuzz-3.13.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b1b065f370d54551dcc785c6f9eeb5bd517ae14c983d2784c064b3aa525896df", size = 2456856, upload-time = "2025-04-03T20:37:01.708Z" },
2162
- { url = "https://files.pythonhosted.org/packages/6b/64/e9804212e3286d027ac35bbb66603c9456c2bce23f823b67d2f5cabc05c1/rapidfuzz-3.13.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:11b125d8edd67e767b2295eac6eb9afe0b1cdc82ea3d4b9257da4b8e06077798", size = 7567107, upload-time = "2025-04-03T20:37:04.521Z" },
2163
- { url = "https://files.pythonhosted.org/packages/8a/f2/7d69e7bf4daec62769b11757ffc31f69afb3ce248947aadbb109fefd9f65/rapidfuzz-3.13.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:c33f9c841630b2bb7e69a3fb5c84a854075bb812c47620978bddc591f764da3d", size = 2854192, upload-time = "2025-04-03T20:37:06.905Z" },
2164
- { url = "https://files.pythonhosted.org/packages/05/21/ab4ad7d7d0f653e6fe2e4ccf11d0245092bef94cdff587a21e534e57bda8/rapidfuzz-3.13.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:ae4574cb66cf1e85d32bb7e9ec45af5409c5b3970b7ceb8dea90168024127566", size = 3398876, upload-time = "2025-04-03T20:37:09.692Z" },
2165
- { url = "https://files.pythonhosted.org/packages/0f/a8/45bba94c2489cb1ee0130dcb46e1df4fa2c2b25269e21ffd15240a80322b/rapidfuzz-3.13.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e05752418b24bbd411841b256344c26f57da1148c5509e34ea39c7eb5099ab72", size = 4377077, upload-time = "2025-04-03T20:37:11.929Z" },
2166
- { url = "https://files.pythonhosted.org/packages/0c/f3/5e0c6ae452cbb74e5436d3445467447e8c32f3021f48f93f15934b8cffc2/rapidfuzz-3.13.0-cp313-cp313-win32.whl", hash = "sha256:0e1d08cb884805a543f2de1f6744069495ef527e279e05370dd7c83416af83f8", size = 1822066, upload-time = "2025-04-03T20:37:14.425Z" },
2167
- { url = "https://files.pythonhosted.org/packages/96/e3/a98c25c4f74051df4dcf2f393176b8663bfd93c7afc6692c84e96de147a2/rapidfuzz-3.13.0-cp313-cp313-win_amd64.whl", hash = "sha256:9a7c6232be5f809cd39da30ee5d24e6cadd919831e6020ec6c2391f4c3bc9264", size = 1615100, upload-time = "2025-04-03T20:37:16.611Z" },
2168
- { url = "https://files.pythonhosted.org/packages/60/b1/05cd5e697c00cd46d7791915f571b38c8531f714832eff2c5e34537c49ee/rapidfuzz-3.13.0-cp313-cp313-win_arm64.whl", hash = "sha256:3f32f15bacd1838c929b35c84b43618481e1b3d7a61b5ed2db0291b70ae88b53", size = 858976, upload-time = "2025-04-03T20:37:19.336Z" },
2169
- ]
2170
-
2171
  [[package]]
2172
  name = "referencing"
2173
  version = "0.36.2"
@@ -2317,44 +2258,6 @@ wheels = [
2317
  { url = "https://files.pythonhosted.org/packages/4d/c0/1108ad9f01567f66b3154063605b350b69c3c9366732e09e45f9fd0d1deb/safehttpx-0.1.6-py3-none-any.whl", hash = "sha256:407cff0b410b071623087c63dd2080c3b44dc076888d8c5823c00d1e58cb381c", size = 8692, upload-time = "2024-12-02T18:44:08.555Z" },
2318
  ]
2319
 
2320
- [[package]]
2321
- name = "scipy"
2322
- version = "1.15.3"
2323
- source = { registry = "https://pypi.org/simple" }
2324
- dependencies = [
2325
- { name = "numpy" },
2326
- ]
2327
- sdist = { url = "https://files.pythonhosted.org/packages/0f/37/6964b830433e654ec7485e45a00fc9a27cf868d622838f6b6d9c5ec0d532/scipy-1.15.3.tar.gz", hash = "sha256:eae3cf522bc7df64b42cad3925c876e1b0b6c35c1337c93e12c0f366f55b0eaf", size = 59419214, upload-time = "2025-05-08T16:13:05.955Z" }
2328
- wheels = [
2329
- { url = "https://files.pythonhosted.org/packages/37/4b/683aa044c4162e10ed7a7ea30527f2cbd92e6999c10a8ed8edb253836e9c/scipy-1.15.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6ac6310fdbfb7aa6612408bd2f07295bcbd3fda00d2d702178434751fe48e019", size = 38766735, upload-time = "2025-05-08T16:06:06.471Z" },
2330
- { url = "https://files.pythonhosted.org/packages/7b/7e/f30be3d03de07f25dc0ec926d1681fed5c732d759ac8f51079708c79e680/scipy-1.15.3-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:185cd3d6d05ca4b44a8f1595af87f9c372bb6acf9c808e99aa3e9aa03bd98cf6", size = 30173284, upload-time = "2025-05-08T16:06:11.686Z" },
2331
- { url = "https://files.pythonhosted.org/packages/07/9c/0ddb0d0abdabe0d181c1793db51f02cd59e4901da6f9f7848e1f96759f0d/scipy-1.15.3-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:05dc6abcd105e1a29f95eada46d4a3f251743cfd7d3ae8ddb4088047f24ea477", size = 22446958, upload-time = "2025-05-08T16:06:15.97Z" },
2332
- { url = "https://files.pythonhosted.org/packages/af/43/0bce905a965f36c58ff80d8bea33f1f9351b05fad4beaad4eae34699b7a1/scipy-1.15.3-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:06efcba926324df1696931a57a176c80848ccd67ce6ad020c810736bfd58eb1c", size = 25242454, upload-time = "2025-05-08T16:06:20.394Z" },
2333
- { url = "https://files.pythonhosted.org/packages/56/30/a6f08f84ee5b7b28b4c597aca4cbe545535c39fe911845a96414700b64ba/scipy-1.15.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c05045d8b9bfd807ee1b9f38761993297b10b245f012b11b13b91ba8945f7e45", size = 35210199, upload-time = "2025-05-08T16:06:26.159Z" },
2334
- { url = "https://files.pythonhosted.org/packages/0b/1f/03f52c282437a168ee2c7c14a1a0d0781a9a4a8962d84ac05c06b4c5b555/scipy-1.15.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:271e3713e645149ea5ea3e97b57fdab61ce61333f97cfae392c28ba786f9bb49", size = 37309455, upload-time = "2025-05-08T16:06:32.778Z" },
2335
- { url = "https://files.pythonhosted.org/packages/89/b1/fbb53137f42c4bf630b1ffdfc2151a62d1d1b903b249f030d2b1c0280af8/scipy-1.15.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6cfd56fc1a8e53f6e89ba3a7a7251f7396412d655bca2aa5611c8ec9a6784a1e", size = 36885140, upload-time = "2025-05-08T16:06:39.249Z" },
2336
- { url = "https://files.pythonhosted.org/packages/2e/2e/025e39e339f5090df1ff266d021892694dbb7e63568edcfe43f892fa381d/scipy-1.15.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:0ff17c0bb1cb32952c09217d8d1eed9b53d1463e5f1dd6052c7857f83127d539", size = 39710549, upload-time = "2025-05-08T16:06:45.729Z" },
2337
- { url = "https://files.pythonhosted.org/packages/e6/eb/3bf6ea8ab7f1503dca3a10df2e4b9c3f6b3316df07f6c0ded94b281c7101/scipy-1.15.3-cp312-cp312-win_amd64.whl", hash = "sha256:52092bc0472cfd17df49ff17e70624345efece4e1a12b23783a1ac59a1b728ed", size = 40966184, upload-time = "2025-05-08T16:06:52.623Z" },
2338
- { url = "https://files.pythonhosted.org/packages/73/18/ec27848c9baae6e0d6573eda6e01a602e5649ee72c27c3a8aad673ebecfd/scipy-1.15.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:2c620736bcc334782e24d173c0fdbb7590a0a436d2fdf39310a8902505008759", size = 38728256, upload-time = "2025-05-08T16:06:58.696Z" },
2339
- { url = "https://files.pythonhosted.org/packages/74/cd/1aef2184948728b4b6e21267d53b3339762c285a46a274ebb7863c9e4742/scipy-1.15.3-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:7e11270a000969409d37ed399585ee530b9ef6aa99d50c019de4cb01e8e54e62", size = 30109540, upload-time = "2025-05-08T16:07:04.209Z" },
2340
- { url = "https://files.pythonhosted.org/packages/5b/d8/59e452c0a255ec352bd0a833537a3bc1bfb679944c4938ab375b0a6b3a3e/scipy-1.15.3-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:8c9ed3ba2c8a2ce098163a9bdb26f891746d02136995df25227a20e71c396ebb", size = 22383115, upload-time = "2025-05-08T16:07:08.998Z" },
2341
- { url = "https://files.pythonhosted.org/packages/08/f5/456f56bbbfccf696263b47095291040655e3cbaf05d063bdc7c7517f32ac/scipy-1.15.3-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:0bdd905264c0c9cfa74a4772cdb2070171790381a5c4d312c973382fc6eaf730", size = 25163884, upload-time = "2025-05-08T16:07:14.091Z" },
2342
- { url = "https://files.pythonhosted.org/packages/a2/66/a9618b6a435a0f0c0b8a6d0a2efb32d4ec5a85f023c2b79d39512040355b/scipy-1.15.3-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:79167bba085c31f38603e11a267d862957cbb3ce018d8b38f79ac043bc92d825", size = 35174018, upload-time = "2025-05-08T16:07:19.427Z" },
2343
- { url = "https://files.pythonhosted.org/packages/b5/09/c5b6734a50ad4882432b6bb7c02baf757f5b2f256041da5df242e2d7e6b6/scipy-1.15.3-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9deabd6d547aee2c9a81dee6cc96c6d7e9a9b1953f74850c179f91fdc729cb7", size = 37269716, upload-time = "2025-05-08T16:07:25.712Z" },
2344
- { url = "https://files.pythonhosted.org/packages/77/0a/eac00ff741f23bcabd352731ed9b8995a0a60ef57f5fd788d611d43d69a1/scipy-1.15.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:dde4fc32993071ac0c7dd2d82569e544f0bdaff66269cb475e0f369adad13f11", size = 36872342, upload-time = "2025-05-08T16:07:31.468Z" },
2345
- { url = "https://files.pythonhosted.org/packages/fe/54/4379be86dd74b6ad81551689107360d9a3e18f24d20767a2d5b9253a3f0a/scipy-1.15.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:f77f853d584e72e874d87357ad70f44b437331507d1c311457bed8ed2b956126", size = 39670869, upload-time = "2025-05-08T16:07:38.002Z" },
2346
- { url = "https://files.pythonhosted.org/packages/87/2e/892ad2862ba54f084ffe8cc4a22667eaf9c2bcec6d2bff1d15713c6c0703/scipy-1.15.3-cp313-cp313-win_amd64.whl", hash = "sha256:b90ab29d0c37ec9bf55424c064312930ca5f4bde15ee8619ee44e69319aab163", size = 40988851, upload-time = "2025-05-08T16:08:33.671Z" },
2347
- { url = "https://files.pythonhosted.org/packages/1b/e9/7a879c137f7e55b30d75d90ce3eb468197646bc7b443ac036ae3fe109055/scipy-1.15.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:3ac07623267feb3ae308487c260ac684b32ea35fd81e12845039952f558047b8", size = 38863011, upload-time = "2025-05-08T16:07:44.039Z" },
2348
- { url = "https://files.pythonhosted.org/packages/51/d1/226a806bbd69f62ce5ef5f3ffadc35286e9fbc802f606a07eb83bf2359de/scipy-1.15.3-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:6487aa99c2a3d509a5227d9a5e889ff05830a06b2ce08ec30df6d79db5fcd5c5", size = 30266407, upload-time = "2025-05-08T16:07:49.891Z" },
2349
- { url = "https://files.pythonhosted.org/packages/e5/9b/f32d1d6093ab9eeabbd839b0f7619c62e46cc4b7b6dbf05b6e615bbd4400/scipy-1.15.3-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:50f9e62461c95d933d5c5ef4a1f2ebf9a2b4e83b0db374cb3f1de104d935922e", size = 22540030, upload-time = "2025-05-08T16:07:54.121Z" },
2350
- { url = "https://files.pythonhosted.org/packages/e7/29/c278f699b095c1a884f29fda126340fcc201461ee8bfea5c8bdb1c7c958b/scipy-1.15.3-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:14ed70039d182f411ffc74789a16df3835e05dc469b898233a245cdfd7f162cb", size = 25218709, upload-time = "2025-05-08T16:07:58.506Z" },
2351
- { url = "https://files.pythonhosted.org/packages/24/18/9e5374b617aba742a990581373cd6b68a2945d65cc588482749ef2e64467/scipy-1.15.3-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a769105537aa07a69468a0eefcd121be52006db61cdd8cac8a0e68980bbb723", size = 34809045, upload-time = "2025-05-08T16:08:03.929Z" },
2352
- { url = "https://files.pythonhosted.org/packages/e1/fe/9c4361e7ba2927074360856db6135ef4904d505e9b3afbbcb073c4008328/scipy-1.15.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9db984639887e3dffb3928d118145ffe40eff2fa40cb241a306ec57c219ebbbb", size = 36703062, upload-time = "2025-05-08T16:08:09.558Z" },
2353
- { url = "https://files.pythonhosted.org/packages/b7/8e/038ccfe29d272b30086b25a4960f757f97122cb2ec42e62b460d02fe98e9/scipy-1.15.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:40e54d5c7e7ebf1aa596c374c49fa3135f04648a0caabcb66c52884b943f02b4", size = 36393132, upload-time = "2025-05-08T16:08:15.34Z" },
2354
- { url = "https://files.pythonhosted.org/packages/10/7e/5c12285452970be5bdbe8352c619250b97ebf7917d7a9a9e96b8a8140f17/scipy-1.15.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:5e721fed53187e71d0ccf382b6bf977644c533e506c4d33c3fb24de89f5c3ed5", size = 38979503, upload-time = "2025-05-08T16:08:21.513Z" },
2355
- { url = "https://files.pythonhosted.org/packages/81/06/0a5e5349474e1cbc5757975b21bd4fad0e72ebf138c5592f191646154e06/scipy-1.15.3-cp313-cp313t-win_amd64.whl", hash = "sha256:76ad1fb5f8752eabf0fa02e4cc0336b4e8f021e2d5f061ed37d6d264db35e3ca", size = 40308097, upload-time = "2025-05-08T16:08:27.627Z" },
2356
- ]
2357
-
2358
  [[package]]
2359
  name = "semantic-version"
2360
  version = "2.10.0"
@@ -2468,27 +2371,6 @@ wheels = [
2468
  { url = "https://files.pythonhosted.org/packages/8b/0c/9d30a4ebeb6db2b25a841afbb80f6ef9a854fc3b41be131d249a977b4959/starlette-0.46.2-py3-none-any.whl", hash = "sha256:595633ce89f8ffa71a015caed34a5b2dc1c0cdb3f0f1fbd1e69339cf2abeec35", size = 72037, upload-time = "2025-04-13T13:56:16.21Z" },
2469
  ]
2470
 
2471
- [[package]]
2472
- name = "supervision"
2473
- version = "0.25.1"
2474
- source = { registry = "https://pypi.org/simple" }
2475
- dependencies = [
2476
- { name = "contourpy" },
2477
- { name = "defusedxml" },
2478
- { name = "matplotlib" },
2479
- { name = "numpy" },
2480
- { name = "opencv-python" },
2481
- { name = "pillow" },
2482
- { name = "pyyaml" },
2483
- { name = "requests" },
2484
- { name = "scipy" },
2485
- { name = "tqdm" },
2486
- ]
2487
- sdist = { url = "https://files.pythonhosted.org/packages/4c/87/3daaa3aec1766f93d4c07d33f933a5ded0a6243a099b6b399b6268053bfe/supervision-0.25.1.tar.gz", hash = "sha256:61781b4abe4fa6ff95c58af6aec7dd3451a78e7e6a797e9ea2787f93771dd031", size = 146657, upload-time = "2024-12-13T13:12:10.64Z" }
2488
- wheels = [
2489
- { url = "https://files.pythonhosted.org/packages/c1/24/d3bcad7ece751166ed308c6deb7e7d02a62a7f5a6e01e61ff2787c538fb0/supervision-0.25.1-py3-none-any.whl", hash = "sha256:ebc015c22983bc64563beda75f5f529e465e4020b318da07948ce03148307a72", size = 181480, upload-time = "2024-12-13T13:12:08.1Z" },
2490
- ]
2491
-
2492
  [[package]]
2493
  name = "synchronicity"
2494
  version = "0.9.12"
 
831
  source = { virtual = "." }
832
  dependencies = [
833
  { name = "gradio", extra = ["mcp"] },
834
+ { name = "matplotlib" },
835
  { name = "modal" },
836
  { name = "numpy" },
837
  { name = "pillow" },
 
840
  [package.dev-dependencies]
841
  dev = [
842
  { name = "jupyterlab" },
 
843
  { name = "opencv-contrib-python" },
 
844
  { name = "ruff" },
 
845
  ]
846
 
847
  [package.metadata]
848
  requires-dist = [
849
  { name = "gradio", extras = ["mcp"], specifier = ">=5.32.1" },
850
+ { name = "matplotlib", specifier = ">=3.10.3" },
851
  { name = "modal", specifier = ">=1.0.2" },
852
  { name = "numpy", specifier = ">=2.2.6" },
853
  { name = "pillow", specifier = ">=11.2.1" },
 
856
  [package.metadata.requires-dev]
857
  dev = [
858
  { name = "jupyterlab", specifier = ">=4.4.3" },
 
859
  { name = "opencv-contrib-python", specifier = ">=4.11.0.86" },
 
860
  { name = "ruff", specifier = ">=0.11.12" },
 
861
  ]
862
 
863
  [[package]]
 
1568
  { url = "https://files.pythonhosted.org/packages/0d/c6/146487546adc4726f0be591a65b466973feaa58cc3db711087e802e940fb/opencv_contrib_python-4.11.0.86-cp37-abi3-win_amd64.whl", hash = "sha256:654758a9ae8ca9a75fca7b64b19163636534f0eedffe1e14c3d7218988625c8d", size = 46185163, upload-time = "2025-01-16T13:52:39.745Z" },
1569
  ]
1570
 
 
 
 
 
 
1571
  [[package]]
1572
  name = "orjson"
1573
  version = "3.10.18"
 
2109
  { url = "https://files.pythonhosted.org/packages/05/4c/bf3cad0d64c3214ac881299c4562b815f05d503bccc513e3fd4fdc6f67e4/pyzmq-26.4.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:26a2a7451606b87f67cdeca2c2789d86f605da08b4bd616b1a9981605ca3a364", size = 1395540, upload-time = "2025-04-04T12:04:30.562Z" },
2110
  ]
2111
 
 
 
 
 
 
2112
  [[package]]
2113
  name = "referencing"
2114
  version = "0.36.2"
 
2258
  { url = "https://files.pythonhosted.org/packages/4d/c0/1108ad9f01567f66b3154063605b350b69c3c9366732e09e45f9fd0d1deb/safehttpx-0.1.6-py3-none-any.whl", hash = "sha256:407cff0b410b071623087c63dd2080c3b44dc076888d8c5823c00d1e58cb381c", size = 8692, upload-time = "2024-12-02T18:44:08.555Z" },
2259
  ]
2260
 
 
 
 
 
 
2261
  [[package]]
2262
  name = "semantic-version"
2263
  version = "2.10.0"
 
2371
  { url = "https://files.pythonhosted.org/packages/8b/0c/9d30a4ebeb6db2b25a841afbb80f6ef9a854fc3b41be131d249a977b4959/starlette-0.46.2-py3-none-any.whl", hash = "sha256:595633ce89f8ffa71a015caed34a5b2dc1c0cdb3f0f1fbd1e69339cf2abeec35", size = 72037, upload-time = "2025-04-13T13:56:16.21Z" },
2372
  ]
2373
 
 
 
 
 
 
2374
  [[package]]
2375
  name = "synchronicity"
2376
  version = "0.9.12"