hysts-bot committed
Commit ed7b856 · verified · 1 Parent(s): 1f693ce

Upload folder using huggingface_hub

Files changed (9)
  1. 0.codes.pt +1 -1
  2. 0.metadata.json +6 -1
  3. 0.residuals.pt +1 -1
  4. avg_residual.pt +1 -1
  5. buckets.pt +1 -1
  6. centroids.pt +1 -1
  7. ivf.pid.pt +2 -2
  8. metadata.json +4 -4
  9. plan.json +4 -4
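The commit message above ("Upload folder using huggingface_hub") indicates the index was pushed with the library's upload_folder API. A minimal sketch of such a push, assuming a cached login token; the local folder path and repo id are placeholders, not values taken from this commit:

    from huggingface_hub import HfApi

    api = HfApi()  # picks up the token from a prior `huggingface-cli login`
    api.upload_folder(
        folder_path="path/to/colbert/index",  # placeholder: local index directory
        repo_id="user/repo-name",             # placeholder: target repo on the Hub
        commit_message="Upload folder using huggingface_hub",
    )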
0.codes.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9f62bfcea4267b93f217e0fcd4f097f3c77a5ccf0d9f84362adb6f12809277eb
+oid sha256:672f09ecb6c3b1c4137ec8c9d462457dfca2e0ae5a09beaf6af05db5e5e544ad
 size 3492444
0.metadata.json CHANGED
@@ -1 +1,6 @@
-{"passage_offset":0,"num_passages":5102,"num_embeddings":872828,"embedding_offset":0}
+{
+ "passage_offset": 0,
+ "num_passages": 5102,
+ "num_embeddings": 872828,
+ "embedding_offset": 0
+}
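Note that the values in 0.metadata.json are identical before and after this commit; only the serialization changed from compact to indented JSON. A minimal sketch of that difference, assuming a standard json round-trip (the exact indent width used by the writer is an assumption):

    import json

    meta = {"passage_offset": 0, "num_passages": 5102,
            "num_embeddings": 872828, "embedding_offset": 0}

    compact = json.dumps(meta, separators=(",", ":"))  # old on-disk form
    pretty = json.dumps(meta, indent=1)                # new on-disk form (indent assumed)
    assert json.loads(compact) == json.loads(pretty)   # same content, different bytes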
0.residuals.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a8d01bd76f85631867054049a9346420bca5d7f5ffa4e38fd24d1c17ec07c97a
+oid sha256:a55d155fc87a5aac084fed4bcc9c94fb06e93ab467178d3df0f2d50dc1d569e4
 size 55862192
avg_residual.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b2c2903649d2f05774f4aa69f34021fe2e10f53b7e4ac55a043dda6ebc4403d9
+oid sha256:f565abb23a3e7e08075ee2c3d3c744265e66bee7f2c4206b1ea7781b827882d7
 size 1205
buckets.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f8d08f7aa5b8004fec3591dd4f77d32a96f32cd9eebed021e3c2754579c2e29f
+oid sha256:26dfcc410469c8cbba6ab8da8ab25e5e255943fbeb1073d1780eb7ea471a554a
 size 1432
centroids.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d5ee9818cedbca3a688d543b0d4481b787abce1b7a773276093cde8dfa631cf8
+oid sha256:7c58cc7face271401a63a6f889571f5a94b2f063cd19e43c8c287353c413d14d
 size 2098342
ivf.pid.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:865340cf620b1ceedb6841e93b284b3a1e935892cef49436fc14a4fc5db08268
-size 2318680
+oid sha256:366a2f4875dadb9d65e616779dcc1d5d933ad9a0b411ffba94d2d01489fa57f2
+size 2315032
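Each of the .pt entries above is a Git LFS pointer: the oid is the SHA-256 digest of the actual file content and size is its byte count. A minimal sketch for checking a downloaded file against its pointer, using values from the updated ivf.pid.pt pointer above:

    import hashlib

    def verify_lfs_pointer(path, expected_oid, expected_size):
        # Stream the file so large blobs never need to fit in memory.
        h = hashlib.sha256()
        size = 0
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
                size += len(chunk)
        return h.hexdigest() == expected_oid and size == expected_size

    # e.g. against the new ivf.pid.pt pointer:
    ok = verify_lfs_pointer(
        "ivf.pid.pt",
        "366a2f4875dadb9d65e616779dcc1d5d933ad9a0b411ffba94d2d01489fa57f2",
        2315032,
    )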
metadata.json CHANGED
@@ -37,7 +37,7 @@
 "checkpoint":"colbert-ir/colbertv2.0",
 "triples":"/future/u/okhattab/root/unit/experiments/2021.10/downstream.distillation.round2.2_score/round2.nway6.cosine.ib/examples.64.json",
 "collection":[
-"list with 5071 elements starting with...",
+"list with 5102 elements starting with...",
 [
 "Driven by large-data pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful and promptable framework, revolutionizing the segmentation models. Despite the generality, customizing SAM for specific visual concepts without man-powered prompting is under explored, e.g., automatically segmenting your pet dog in different images. In this paper, we propose a training-free Personalization approach for SAM, termed as PerSAM. Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior, and segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement. In this way, we effectively adapt SAM for private use without any training. To further alleviate the mask ambiguity, we present an efficient one-shot fine-tuning variant, PerSAM-F. Freezing the entire SAM, we introduce two learnable weights for multi-scale masks, only training 2 parameters within 10 seconds for improved performance. To demonstrate our efficacy, we construct a new segmentation dataset, PerSeg, for personalized evaluation, and test our methods on video object segmentation with competitive performance.",
 "Freezing the entire SAM, we introduce two learnable weights for multi-scale masks, only training 2 parameters within 10 seconds for improved performance. To demonstrate our efficacy, we construct a new segmentation dataset, PerSeg, for personalized evaluation, and test our methods on video object segmentation with competitive performance. Besides, our approach can also enhance DreamBooth to personalize Stable Diffusion for text-to-image generation, which discards the background disturbance for better target appearance learning. Code is released at https://github.com/ZrrSkywalker/Personalize-SAM",
@@ -50,7 +50,7 @@
 "root":".ragatouille/",
 "experiment":"colbert",
 "index_root":null,
-"name":"2024-09/11/01.44.54",
+"name":"2024-09/12/12.02.32",
 "rank":0,
 "nranks":1,
 "amp":true,
@@ -59,8 +59,8 @@
 },
 "num_chunks":1,
 "num_partitions":8192,
-"num_embeddings":9576995,
-"avg_doclen":171.2003549596,
+"num_embeddings":872828,
+"avg_doclen":171.0756566053,
 "RAGatouille":{
 "index_config":{
 "index_type":"PLAID",
plan.json CHANGED
@@ -37,7 +37,7 @@
 "checkpoint": "colbert-ir\/colbertv2.0",
 "triples": "\/future\/u\/okhattab\/root\/unit\/experiments\/2021.10\/downstream.distillation.round2.2_score\/round2.nway6.cosine.ib\/examples.64.json",
 "collection": [
-"list with 5071 elements starting with...",
+"list with 5102 elements starting with...",
 [
 "Driven by large-data pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful and promptable framework, revolutionizing the segmentation models. Despite the generality, customizing SAM for specific visual concepts without man-powered prompting is under explored, e.g., automatically segmenting your pet dog in different images. In this paper, we propose a training-free Personalization approach for SAM, termed as PerSAM. Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior, and segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement. In this way, we effectively adapt SAM for private use without any training. To further alleviate the mask ambiguity, we present an efficient one-shot fine-tuning variant, PerSAM-F. Freezing the entire SAM, we introduce two learnable weights for multi-scale masks, only training 2 parameters within 10 seconds for improved performance. To demonstrate our efficacy, we construct a new segmentation dataset, PerSeg, for personalized evaluation, and test our methods on video object segmentation with competitive performance.",
 "Freezing the entire SAM, we introduce two learnable weights for multi-scale masks, only training 2 parameters within 10 seconds for improved performance. To demonstrate our efficacy, we construct a new segmentation dataset, PerSeg, for personalized evaluation, and test our methods on video object segmentation with competitive performance. Besides, our approach can also enhance DreamBooth to personalize Stable Diffusion for text-to-image generation, which discards the background disturbance for better target appearance learning. Code is released at https:\/\/github.com\/ZrrSkywalker\/Personalize-SAM",
@@ -50,7 +50,7 @@
 "root": ".ragatouille\/",
 "experiment": "colbert",
 "index_root": null,
-"name": "2024-09\/11\/01.44.54",
+"name": "2024-09\/12\/12.02.32",
 "rank": 0,
 "nranks": 1,
 "amp": true,
@@ -59,6 +59,6 @@
 },
 "num_chunks": 1,
 "num_partitions": 8192,
-"num_embeddings_est": 868156.9642028809,
-"avg_doclen_est": 171.20034790039062
+"num_embeddings_est": 872827.9819946289,
+"avg_doclen_est": 171.07565307617188
 }
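As a sanity check, the updated statistics are mutually consistent across files: the exact embedding count divided by the passage count reproduces the stored average document length, and the sampled estimates in plan.json agree with the exact counts in metadata.json. A quick check using only numbers from the diffs above:

    num_passages = 5102              # 0.metadata.json
    num_embeddings = 872828          # metadata.json (exact count)
    avg_doclen = 171.0756566053      # metadata.json

    assert abs(num_embeddings / num_passages - avg_doclen) < 1e-9

    num_embeddings_est = 872827.9819946289  # plan.json (estimate)
    avg_doclen_est = 171.07565307617188     # plan.json (estimate)

    assert abs(avg_doclen_est * num_passages - num_embeddings_est) < 1e-3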