---
license: apache-2.0
---
This model is compiled for neuronx devices (e.g. an AWS Inf2 instance).
The original checkpoint is [`BAAI/bge-base-en-v1.5`](https://huggingface.co/BAAI/bge-base-en-v1.5).
## Export
Below is the command used to export this model:
```bash
optimum-cli export neuron -m BAAI/bge-base-en-v1.5 --sequence_length 384 --batch_size 1 --task feature-extraction bge_emb/
```
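The exported artifacts can also be loaded straight from the local output directory. Here is a minimal sketch, assuming the export above completed into `bge_emb/`:

```python
from optimum.neuron import NeuronModelForSentenceTransformers

# Load the compiled artifacts from the local export directory produced above.
local_model = NeuronModelForSentenceTransformers.from_pretrained("bge_emb/")
```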
## Usage
Here is an example of using the compiled artifacts for inference:
```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSentenceTransformers

# Load the tokenizer and the compiled model from the Hub.
tokenizer = AutoTokenizer.from_pretrained("optimum/bge-base-en-v1.5-neuronx")
emb_model = NeuronModelForSentenceTransformers.from_pretrained("optimum/bge-base-en-v1.5-neuronx")

inputs = tokenizer("Hamilton is considered to be the best musical of human history.", return_tensors="pt")
emb = emb_model(**inputs)
# The output exposes both token-level and pooled embeddings:
# ["token_embeddings", "sentence_embedding"]
```
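As a follow-up sketch (not part of the original example), the pooled `sentence_embedding` output can be compared across sentences, assuming attribute access on the model output as listed above; the second sentence below is purely illustrative:

```python
import torch

# Embed two sentences one at a time (the model was compiled with batch_size 1)
# and compare them with cosine similarity.
sentences = [
    "Hamilton is considered to be the best musical of human history.",
    "The Lion King is a popular Broadway musical.",
]
embeddings = []
for text in sentences:
    tokens = tokenizer(text, return_tensors="pt")
    embeddings.append(emb_model(**tokens).sentence_embedding)

score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1])
print(score)
```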