Update README.md
README.md CHANGED

---
license: mit
---
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# intel-optimized-model-for-embeddings-v1

This is a text embedding model: it maps sentences and paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search. For sample code that uses this model in a TorchServe container, see [Intel-Optimized-Container-for-Embeddings](https://github.com/intel/Intel-Optimized-Container-for-Embeddings).

## Usage

Install the required packages:

```
pip install -U torch==2.3.1+cpu --extra-index-url https://download.pytorch.org/whl/cpu
pip install -U transformers==4.42.4 intel-extension-for-pytorch==2.3.100
```

Use the example below to load the model with the transformers library, tokenize the text, run the model, and apply mean pooling to the output.

```
# example embedding code
import torch
from transformers import AutoTokenizer, AutoModel
import intel_extension_for_pytorch as ipex

# load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('Intel/intel-optimized-model-for-embeddings-v1')
model = AutoModel.from_pretrained('Intel/intel-optimized-model-for-embeddings-v1', torchscript=True)
model.eval()

# do IPEX optimization, using a sample input of the expected shape
batch_size = 1
seq_length = 512
vocab_size = model.config.vocab_size
sample_input = {"input_ids": torch.randint(vocab_size, size=[batch_size, seq_length]),
                "token_type_ids": torch.zeros(size=[batch_size, seq_length],
                                              dtype=torch.int),
                "attention_mask": torch.randint(1, size=[batch_size, seq_length])}
text = "This is a test."
model = ipex.optimize(model, level="O1", auto_kernel_selection=True,
                      conv_bn_folding=False, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(cache_enabled=False,
                                             dtype=torch.bfloat16):
    # compile the model with TorchScript
    model = torch.jit.trace(model, example_kwarg_inputs=sample_input,
                            check_trace=False, strict=False)
    model = torch.jit.freeze(model)

    # call the model
    tokenized_text = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
    model_output = model(**tokenized_text)

    # do mean pooling: average the token embeddings, weighted by the attention mask
    token_embeddings = model_output[0]
    attention_mask = tokenized_text['attention_mask']
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    output_sum = torch.sum(token_embeddings * input_mask_expanded, 1)
    embeddings = output_sum / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    embeddings = [embeddings[0].tolist()]

    # embeddings output
    print(embeddings)
```
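
Because every text is mapped into the same 512-dimensional space, embeddings can be compared with cosine similarity for semantic search. The sketch below is illustrative rather than part of this repository: it reuses the `model` and `tokenizer` objects from the example above, and the `embed` helper is a hypothetical wrapper around the tokenize, forward, and mean-pooling steps shown there.

```
# illustrative semantic-search sketch; embed() is a hypothetical helper
# wrapping the tokenize -> model -> mean-pooling steps from the example above
import torch
import torch.nn.functional as F

def embed(text):
    tokens = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad(), torch.cpu.amp.autocast(cache_enabled=False,
                                                 dtype=torch.bfloat16):
        token_embeddings = model(**tokens)[0]
    mask = tokens['attention_mask'].unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = torch.sum(token_embeddings * mask, 1)
    return summed / torch.clamp(mask.sum(1), min=1e-9)  # shape [1, 512]

corpus = ["A recipe for chocolate cake.",
          "How to speed up deep learning inference on CPUs.",
          "A short history of the Roman Empire."]
query = "making PyTorch models run faster on Intel processors"

corpus_emb = torch.cat([embed(t) for t in corpus])      # shape [3, 512]
scores = F.cosine_similarity(embed(query), corpus_emb)  # one score per document
print(corpus[int(scores.argmax())])                     # most similar document
```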

## Model Details

### Model Description

This model was fine-tuned with the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library, starting from the [BERT-Medium_L-8_H-512_A-8](https://huggingface.co/nreimers/BERT-Medium_L-8_H-512_A-8) base model (8 layers, hidden size 512, and 8 attention heads, which is where the 512-dimensional embeddings come from). A minimal sketch of such a setup follows.
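
The snippet below is a hypothetical illustration of what a sentence-transformers fine-tuning setup of this kind can look like; the actual training configuration, data pairs, and loss for this model are not documented here, so every hyperparameter and example in it is an assumption.

```
# hypothetical fine-tuning sketch with sentence-transformers (v2.x API);
# not the actual recipe used to train this model
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses, InputExample

# wrap the base checkpoint with a mean-pooling head, matching the usage example
word = models.Transformer('nreimers/BERT-Medium_L-8_H-512_A-8', max_seq_length=512)
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode='mean')
st_model = SentenceTransformer(modules=[word, pool])

# toy (query, relevant passage) pairs standing in for the real training data
train_examples = [InputExample(texts=["what is dbpedia?",
                                      "DBpedia is a knowledge base extracted from Wikipedia."])]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(st_model)

st_model.fit(train_objectives=[(loader, loss)], epochs=1)
```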

### Training Datasets

| Dataset | Description | License |
| ------- | ----------- | ------- |
| beir/dbpedia-entity | DBpedia-Entity is a standard test collection for entity search over the DBpedia knowledge base. | CC BY-SA 3.0 |
| beir/nq | The Natural Questions (NQ) corpus, created to help spur development in open-domain question answering, along with a challenge website based on this data. | CC BY-SA 3.0 |
| beir/scidocs | SciDocs is an evaluation benchmark consisting of seven document-level tasks ranging from citation prediction to document classification and recommendation. | GPL-3.0 |
| beir/trec-covid | TREC-COVID followed the TREC model for building IR test collections through community evaluations of search systems. | CC BY-SA 4.0 |
| beir/touche2020 | Given a question on a controversial topic, retrieve relevant arguments from a focused crawl of online debate portals. | CC BY 4.0 |
| WikiAnswers | The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. | MIT |
| Cohere/wikipedia-22-12-en-embeddings | A processed version of the wikipedia-22-12 dataset: English only, with articles broken up into paragraphs. | Apache 2.0 |
| MNLI | MNLI (Multi-Genre Natural Language Inference) is part of GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), a collection of resources for training, evaluating, and analyzing natural language understanding systems. | MIT |