Update README.md
README.md CHANGED
@@ -10,4 +10,31 @@ pinned: false
license: mit
---
# CLIP Segmentation

The CLIP Segmentation project combines OpenAI's CLIP model with a segmentation decoder to segment images from text prompts: provide an image and one or more text prompts, and it returns a segmentation mask for each prompt.
## Features

- **Textual Prompt Segmentation**: Segment images based on textual prompts.
- **Multiple Prompts**: Support for multiple prompts separated by commas.
- **Interactive UI**: User-friendly interface for easy image uploads and prompt inputs.
## Usage

1. Upload an image using the provided interface.
2. Enter your text prompts, separated by commas.
3. Click "Visualize Segments" to generate the segmentation masks.
4. Hover over a class to view its individual segment.
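The Space's application code is not shown in this diff, so the following is only a rough sketch of how these steps could be wired up, assuming the app is built with Gradio. The `segment` function below is a placeholder that returns dummy stripe masks instead of real model output, and `gr.AnnotatedImage` provides the hover-to-highlight behaviour described in step 4.

```python
# Hypothetical UI sketch: assumes a Gradio app; `segment` returns dummy masks,
# not real model predictions.
import numpy as np
import gradio as gr

def segment(image, prompts):
    """Return (image, [(mask, label), ...]) in the format gr.AnnotatedImage expects."""
    labels = [p.strip() for p in prompts.split(",") if p.strip()]
    h, w = image.shape[:2]
    masks = []
    for i, _ in enumerate(labels):
        # Placeholder mask (one vertical stripe per prompt); the real app would
        # call the CLIP + decoder model here.
        mask = np.zeros((h, w), dtype=np.uint8)
        mask[:, i * w // len(labels):(i + 1) * w // len(labels)] = 1
        masks.append(mask)
    return image, list(zip(masks, labels))

with gr.Blocks(title="CLIP Segmentation") as demo:
    image = gr.Image(type="numpy", label="Input image")
    prompts = gr.Textbox(label="Prompts (comma-separated)")
    button = gr.Button("Visualize Segments")
    # Hovering a class name in AnnotatedImage highlights its segment.
    output = gr.AnnotatedImage(label="Segments")
    button.click(segment, inputs=[image, prompts], outputs=output)

if __name__ == "__main__":
    demo.launch()
```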
## How It Works

The app combines a pretrained CLIP model with a segmentation decoder. CLIP, developed by OpenAI, learns to relate images and natural language; the segmentation decoder builds on CLIP's features to produce a mask for each text prompt, bridging the gap between vision and language.
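As a concrete illustration of this pipeline, here is a minimal inference sketch using the CLIPSeg port in Hugging Face `transformers` with the `CIDAS/clipseg-rd64-refined` checkpoint. This is an assumption for illustration only; the Space may load the original `timojl/clipseg` weights in a different way.

```python
# Illustrative only: assumes the transformers CLIPSeg port and the
# "CIDAS/clipseg-rd64-refined" checkpoint; the Space itself may differ.
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("example.jpg").convert("RGB")           # hypothetical input image
prompts = [p.strip() for p in "a cat, a dog".split(",")]   # comma-separated prompts

# CLIPSeg scores every (image, prompt) pair, so the image is repeated per prompt.
inputs = processor(text=prompts, images=[image] * len(prompts),
                   padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
if logits.dim() == 2:                 # single prompt -> restore the batch dimension
    logits = logits.unsqueeze(0)

probs = torch.sigmoid(logits)         # (num_prompts, 352, 352) per-pixel probabilities
masks = probs > 0.5                   # one boolean mask per prompt
for prompt, mask in zip(prompts, masks):
    print(prompt, f"{mask.float().mean().item():.1%} of pixels")
```

The decoder's low-resolution masks (352x352 by default) would then be resized back to the upload's original dimensions before being overlaid on the image in the UI.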
## Acknowledgements

- Thanks to [OpenAI](https://openai.com/) for the CLIP model.
- Thanks to the authors of [Image Segmentation Using Text and Image Prompts](https://github.com/timojl/clipseg) (CLIPSeg) for the segmentation approach and code.