Simon Le Goff committed
Commit 1e03c2b
Parent(s): 04ea210
Update description and title.
app.py CHANGED
@@ -62,7 +62,25 @@ def query(
 
 
 description = """
-
+Welcome to the demo of pollen-vision, a simple and unified Python library to zero-shot computer vision models curated
+for robotics use cases. **Pollen-vision** is designed for ease of installation and use, composed of independent modules
+that can be combined to create a 3D object detection pipeline, getting the position of the objects in 3D space (x, y, z).
+
+\n\nIn this demo, you have the option to choose between two tasks: object detection and object detection + segmentation.
+The models available are:
+
+- **OWL-VIT** (Open World Localization - Vision Transformer, By Google Research): this model performs text-conditionned
+zero-shot 2D object localization in RGB images.
+- **Mobile SAM**: A lightweight version of the Segment Anything Model (SAM) by Meta AI. SAM is a zero shot image
+segmentation model. It can be prompted with bounding boxes or points. (https://github.com/ChaoningZhang/MobileSAM)
+
+\n\nYou can input images in this demo in three ways: either by trying out the provided examples, by uploading an image
+of your choice, or by capturing an image from your computer's webcam.
+Additionally, you should provide text queries representing a list of objects to detect. Separate each object with a comma.
+The last input parameter is the detection threshold (ranging from 0 to 1), which defaults to 0.1.
+
+\n\nCheck out our blog post introducing pollen-vision or its <a href="https://github.com/pollen-robotics/pollen-vision">
+Github repository</a> for more info!
 """
 
 demo_inputs = [
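The description added above says OWL-ViT performs text-conditioned zero-shot 2D detection and that the detection threshold defaults to 0.1. As a rough sketch of that behaviour (not the Space's actual query function, which lies outside this diff), a call through the Hugging Face transformers zero-shot object detection pipeline could look like this; the checkpoint name, image path, and label list are illustrative assumptions:

# Hypothetical sketch: text-conditioned zero-shot detection with OWL-ViT,
# mirroring what the description explains. Not the demo's own code.
from PIL import Image
from transformers import pipeline

detector = pipeline(
    task="zero-shot-object-detection",
    model="google/owlvit-base-patch32",  # assumed checkpoint, not taken from the diff
)

image = Image.open("example.jpg")  # placeholder input image
# The comma-separated text queries from the UI would be split into a list like this:
candidate_labels = ["mug", "robot arm", "apple"]

predictions = detector(image, candidate_labels=candidate_labels, threshold=0.1)
for p in predictions:
    # each prediction has a label, a confidence score, and an xmin/ymin/xmax/ymax box
    print(p["label"], round(p["score"], 3), p["box"])

In the detection + segmentation task the description mentions, the returned boxes would then serve as prompts for Mobile SAM to produce masks.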
@@ -103,7 +121,7 @@ demo = gr.Interface(
     fn=query,
     inputs=demo_inputs,
     outputs="image",
-    title="pollen-vision",
+    title="Use zero-shot computer vision models with pollen-vision",
     description=description,
     examples=demo_examples,
 )
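To see where the retitled parameters sit, here is a minimal, hypothetical version of the Interface wiring this commit touches; the placeholder query body, the input components, and the examples value are assumptions, since the rest of app.py is not part of this diff:

# Minimal sketch of the gr.Interface setup changed by this commit (assumed details).
import gradio as gr

description = """Welcome to the demo of pollen-vision ..."""  # shortened placeholder

def query(image, text_queries, threshold):
    # Placeholder: the real app runs zero-shot detection (and optionally segmentation)
    # here and returns an annotated image.
    return image

demo_inputs = [
    gr.Image(type="pil", label="Input image"),  # example, upload, or webcam capture
    gr.Textbox(label="Objects to detect (comma-separated)"),
    gr.Slider(0.0, 1.0, value=0.1, label="Detection threshold"),
]

demo = gr.Interface(
    fn=query,
    inputs=demo_inputs,
    outputs="image",
    title="Use zero-shot computer vision models with pollen-vision",
    description=description,
    examples=None,  # the real Space passes demo_examples here
)

if __name__ == "__main__":
    demo.launch()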
|