Beom0 committed on
Commit 55810aa • 1 Parent(s): 0301cdd

Update README.md

Files changed (1): README.md +14 −3
README.md CHANGED
@@ -14,7 +14,16 @@ tags:
 # ZIM-Anything-ViTB
 
 ## Introduction
-Zero-Shot Image Matting
+
+🚀 Introducing ZIM: Zero-Shot Image Matting – A Step Beyond SAM! 🚀
+
+While SAM (Segment Anything Model) has redefined zero-shot segmentation with broad applications across multiple fields, it often falls short in delivering high-precision, fine-grained masks. That's where ZIM comes in.
+
+🌟 What is ZIM? 🌟
+ZIM (Zero-Shot Image Matting) is a groundbreaking model developed to set a new standard in precision matting while maintaining strong zero-shot capabilities. Like SAM, ZIM can generalize across diverse datasets and objects in a zero-shot paradigm. But ZIM goes beyond, delivering highly accurate, fine-grained masks that capture intricate details.
+
+🔍 Get Started with ZIM 🔍
+Ready to elevate your AI projects with unmatched matting quality? Access ZIM on our project page: https://naver-ai.github.io/ZIM/
 
 ## Installation
 
@@ -32,13 +41,15 @@ cd ZIM; pip install -e .
 
 ## Usage
 
-Download the [model](https://huggingface.co/depth-anything/Depth-Anything-V2-Large/resolve/main/depth_anything_v2_vitl.pth?download=true) first and put it under the `results` directory.
+1. Make the directory `zim_vit_b_2043`.
+2. Download the [encoder](https://huggingface.co/naver-iv/zim-anything-vitb/resolve/main/zim_vit_b_2043/encoder.onnx?download=true) weight and [decoder](https://huggingface.co/naver-iv/zim-anything-vitb/resolve/main/zim_vit_b_2043/decoder.onnx?download=true) weight.
+3. Put them under the `zim_vit_b_2043` directory.
 
 ```python
 from zim_anything import zim_model_registry, ZimPredictor
 
 backbone = "vit_b"
-ckpt_p = "results/zim_vit_b_2043"
+ckpt_p = "zim_vit_b_2043"
 
 model = zim_model_registry[backbone](checkpoint=ckpt_p)
 if torch.cuda.is_available():
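The three numbered setup steps in the updated Usage section (make the `zim_vit_b_2043` directory, download the encoder/decoder ONNX weights, put them in that directory) can be sketched with the Python standard library. The weight URLs are the ones given in the README; the `fetch_weights` helper name and its `download` flag are illustrative assumptions, not part of the ZIM API:

```python
from pathlib import Path
from urllib.request import urlretrieve

# URLs taken verbatim from the README's download links.
BASE = ("https://huggingface.co/naver-iv/zim-anything-vitb/"
        "resolve/main/zim_vit_b_2043")
WEIGHTS = ("encoder.onnx", "decoder.onnx")

def fetch_weights(ckpt_dir="zim_vit_b_2043", download=True):
    """Create the checkpoint directory and fetch any missing ONNX weights.

    Returns a mapping of weight filename -> download URL.
    """
    out = Path(ckpt_dir)
    out.mkdir(exist_ok=True)              # step 1: make the directory
    urls = {}
    for name in WEIGHTS:                  # steps 2-3: download into it
        urls[name] = f"{BASE}/{name}?download=true"
        dest = out / name
        if download and not dest.exists():
            urlretrieve(urls[name], dest)  # large files; may take a while
    return urls
```

Calling `fetch_weights(download=False)` only creates the directory and returns the URLs, which is useful for checking the expected layout before pulling the weights; afterwards, `ckpt_p = "zim_vit_b_2043"` in the snippet above points at the populated directory.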