---
license: apache-2.0
pipeline_tag: mask-generation
library_name: sam2
---

This model was cloned from this [repo](https://github.com/facebookresearch/segment-anything-2/) and will be fine-tuned.

Repository for SAM 2: Segment Anything in Images and Videos, a foundation model for promptable visual segmentation in images and videos from FAIR. See the [SAM 2 paper](https://arxiv.org/abs/2408.00714) for more information.

The official code is publicly released in this [repo](https://github.com/facebookresearch/segment-anything-2/).
For image prediction:

```python
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("iloncka/culico-net-segm-v1-nano")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```
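The `<input_prompts>` placeholder above usually means point, label, and/or box arrays. As a minimal sketch (the exact keyword names follow the official SAM 2 `predict` API; the coordinate values here are purely illustrative), the prompt arrays can be built with NumPy:

```python
import numpy as np

# point_coords: (N, 2) array of (x, y) pixel coordinates of clicks
# point_labels: (N,) array, 1 = foreground click, 0 = background click
point_coords = np.array([[320, 240]], dtype=np.float32)
point_labels = np.array([1], dtype=np.int32)

# An optional box prompt is given as (x0, y0, x1, y1)
box = np.array([100, 100, 500, 400], dtype=np.float32)

# These would then be passed to the predictor, e.g.:
# masks, scores, logits = predictor.predict(
#     point_coords=point_coords, point_labels=point_labels, box=box
# )
print(point_coords.shape, point_labels.shape, box.shape)
```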
For video prediction:

```python
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor

predictor = SAM2VideoPredictor.from_pretrained("iloncka/culico-net-segm-v1-nano")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```
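`propagate_in_video` yields one `(frame_idx, object_ids, masks)` tuple per frame. A common pattern is to collect these into a per-frame dictionary; the sketch below uses a stub generator in place of the real predictor (running the actual model needs the weights and a GPU), and the mask shapes are illustrative only:

```python
import numpy as np

def fake_propagate_in_video(state):
    # Stub standing in for predictor.propagate_in_video(state):
    # yields (frame_idx, object_ids, masks) for each frame.
    for frame_idx in range(3):
        yield frame_idx, [1], np.zeros((1, 4, 4), dtype=bool)

video_segments = {}  # frame_idx -> {obj_id: mask}
for frame_idx, object_ids, masks in fake_propagate_in_video(None):
    video_segments[frame_idx] = {
        obj_id: masks[i] for i, obj_id in enumerate(object_ids)
    }

print(sorted(video_segments))  # frames 0, 1, 2
```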