AIFunOver committed on
Commit f17d8c9
1 Parent(s): f709280

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +49 -0
README.md ADDED
@@ -0,0 +1,49 @@
+ ---
+ base_model: sentence-transformers/all-MiniLM-L6-v2
+ datasets:
+ - s2orc
+ - flax-sentence-embeddings/stackexchange_xml
+ - ms_marco
+ - gooaq
+ - yahoo_answers_topics
+ - code_search_net
+ - search_qa
+ - eli5
+ - snli
+ - multi_nli
+ - wikihow
+ - natural_questions
+ - trivia_qa
+ - embedding-data/sentence-compression
+ - embedding-data/flickr30k-captions
+ - embedding-data/altlex
+ - embedding-data/simple-wiki
+ - embedding-data/QQP
+ - embedding-data/SPECTER
+ - embedding-data/PAQ_pairs
+ - embedding-data/WikiAnswers
+ language: en
+ library_name: sentence-transformers
+ license: apache-2.0
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - transformers
+ - openvino
+ - nncf
+ - fp16
+ ---
+
+ This model is a quantized version of [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2), converted to the OpenVINO format. It was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).
+ First make sure you have `optimum-intel` installed:
+ ```bash
+ pip install optimum[openvino]
+ ```
+ To load the model, you can do the following:
+ ```python
+ from optimum.intel import OVModelForFeatureExtraction
+ model_id = "AIFunOver/all-MiniLM-L6-v2-openvino-fp16"
+ model = OVModelForFeatureExtraction.from_pretrained(model_id)
+ ```