LightEmbed/snowflake-arctic-embed-m-onnx
This is the ONNX version of the Sentence Transformers model Snowflake/snowflake-arctic-embed-m for sentence embedding, optimized for speed and lightweight deployment. By relying on onnxruntime and tokenizers instead of heavier libraries such as sentence-transformers and transformers, this version has a smaller dependency footprint and faster execution. Below are the details of the model:
- Base model: Snowflake/snowflake-arctic-embed-m
- Embedding dimension: 768
- Max sequence length: 512
- File size on disk: 0.41 GB
- Pooling incorporated: Yes
This ONNX model contains all components of the original Sentence Transformers model: Transformer, Pooling, Normalize
Usage (LightEmbed)
Using this model is easy once you have LightEmbed installed:
pip install -U light-embed
Then you can load the model by its original model name like this:
from light_embed import TextEmbedding
sentences = [
    "This is an example sentence",
    "Each sentence is converted"
]
model = TextEmbedding('Snowflake/snowflake-arctic-embed-m')
embeddings = model.encode(sentences)
print(embeddings)
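The returned `embeddings` is a 2-D array with one 768-dimensional row per input sentence, and because the Normalize component is baked into the exported model, each row should already be unit-length. A minimal sanity check, sketched here with a stand-in numpy array rather than real model output:

```python
import numpy as np

# Stand-in for model.encode(sentences): one row per sentence, 768 dims each.
rng = np.random.default_rng(0)
embeddings = rng.random((2, 768))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Shape matches the model card: (n_sentences, embedding_dimension).
assert embeddings.shape == (2, 768)

# The built-in Normalize step should yield unit-length rows.
print(np.allclose(np.linalg.norm(embeddings, axis=1), 1.0))
```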
Alternatively, you can load the model by its ONNX model name like this:
from light_embed import TextEmbedding
sentences = [
    "This is an example sentence",
    "Each sentence is converted"
]
model = TextEmbedding('LightEmbed/snowflake-arctic-embed-m-onnx')
embeddings = model.encode(sentences)
print(embeddings)
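Because the embeddings come out L2-normalized, cosine similarity between two sentences reduces to a dot product of their embedding rows. A small sketch, using hand-made unit vectors in place of real model output:

```python
import numpy as np

def cosine_similarity(a, b):
    # For L2-normalized embeddings, cosine similarity is just the dot product.
    return float(np.dot(a, b))

# Two unit-length vectors standing in for model embeddings.
a = np.array([0.6, 0.8])
b = np.array([0.8, 0.6])
print(round(cosine_similarity(a, b), 2))  # 0.96
```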
Citing & Authors
Binh Nguyen / [email protected]