C10X committed
Commit bfcb318 · verified · 1 Parent(s): 7a88a93

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +4 -4
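
The commit message above says the card was uploaded with huggingface_hub. A minimal sketch of such an upload using the standard `HfApi.upload_file` call; the `repo_id` here is an illustrative guess based on the committer and model name shown on this page, not something stated in the commit:

```python
from huggingface_hub import HfApi

api = HfApi()

# Push the regenerated model card to the Hub.
# repo_id is an assumption inferred from the committer (C10X) and the model name (m2v_model1).
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="C10X/m2v_model1",
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```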
README.md CHANGED
@@ -2,14 +2,14 @@
  base_model: Qwen/Qwen3-Reranker-0.6B
  library_name: model2vec
  license: mit
- model_name: m2v_model
+ model_name: m2v_model1
  tags:
  - embeddings
  - static-embeddings
  - sentence-transformers
  ---

- # m2v_model Model Card
+ # m2v_model1 Model Card

  This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [Qwen/Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.

@@ -32,7 +32,7 @@ Load this model using the `from_pretrained` method:
  from model2vec import StaticModel

  # Load a pretrained Model2Vec model
- model = StaticModel.from_pretrained("m2v_model")
+ model = StaticModel.from_pretrained("m2v_model1")

  # Compute text embeddings
  embeddings = model.encode(["Example sentence"])
@@ -46,7 +46,7 @@ You can also use the [Sentence Transformers library](https://github.com/UKPLab/s
  from sentence_transformers import SentenceTransformer

  # Load a pretrained Sentence Transformer model
- model = SentenceTransformer("m2v_model")
+ model = SentenceTransformer("m2v_model1")

  # Compute text embeddings
  embeddings = model.encode(["Example sentence"])
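
For context on how a card like this is produced: the README describes the model as a Model2Vec distillation of Qwen/Qwen3-Reranker-0.6B. A minimal sketch of that distillation step using model2vec's `distill` API; the `pca_dims` value and output directory are assumptions, not values taken from this commit:

```python
from model2vec.distill import distill

# Distill the base Sentence Transformer into a static Model2Vec model.
# pca_dims=256 and the output directory are illustrative choices, not from this repo.
m2v_model = distill(model_name="Qwen/Qwen3-Reranker-0.6B", pca_dims=256)

# Save locally; the result can then be loaded as shown in the diff above,
# e.g. StaticModel.from_pretrained("m2v_model1") or SentenceTransformer("m2v_model1").
m2v_model.save_pretrained("m2v_model1")
```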