|
--- |
|
pipeline_tag: sentence-similarity |
|
tags: |
|
- finetuner |
|
- mteb |
|
- sentence-transformers |
|
- feature-extraction |
|
- sentence-similarity |
|
- alibi |
|
datasets: |
|
- allenai/c4 |
|
language: en |
|
license: apache-2.0 |
|
model-index: |
|
- name: jina-embedding-b-en-v2 |
|
results: [] |
|
--- |
|
<!-- TODO: add evaluation results here --> |
|
<br><br> |
|
|
|
<p align="center"> |
|
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px"> |
|
</p> |
|
|
|
|
|
<p align="center"> |
|
<b>The text embedding suite trained by the <a href="https://jina.ai/"><b>Jina AI</b></a> <a href="https://github.com/jina-ai/finetuner"><b>Finetuner</b></a> team.</b>
|
</p> |
|
|
|
|
|
## Intended Usage & Model Info |
|
|
|
`jina-embedding-b-en-v2` is an English, monolingual embedding model supporting an 8k sequence length.

It is based on a BERT architecture that incorporates the symmetric bidirectional variant of ALiBi, allowing it to process longer sequences.

The backbone Jina BERT Base model is pretrained on the C4 dataset.

The model is further trained on Jina AI's collection of more than 40 datasets of sentence pairs and hard negatives.

These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
|
|
|
The embedding model was trained with a sequence length of 512, but extrapolates to a sequence length of 8k thanks to ALiBi.

This makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG, and LLM-based generative search.
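
For intuition, the sketch below constructs the symmetric ALiBi bias that is added to the attention logits before the softmax; the helper name and slope choice follow the original ALiBi formulation and are illustrative, not the model's exact implementation.

```python
import torch

def alibi_bias(seq_len: int, num_heads: int) -> torch.Tensor:
    """Symmetric (bidirectional) ALiBi: a penalty of -slope * |i - j| per head."""
    # Per-head slopes form a geometric sequence, as in the ALiBi paper.
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    positions = torch.arange(seq_len)
    distances = (positions[None, :] - positions[:, None]).abs()   # |i - j|
    return -slopes[:, None, None] * distances[None, :, :]         # (heads, seq_len, seq_len)

# Because the bias depends only on the relative distance |i - j|, the same formula
# applies to positions far beyond the 512-token training length.
print(alibi_bias(seq_len=8, num_heads=4).shape)  # torch.Size([4, 8, 8])
```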
|
|
|
This model has 137 million parameters, which enables fast and memory-efficient inference on long documents, while still delivering impressive performance.
|
Additionally, we provide the following embedding models, which also support an 8k sequence length:
|
|
|
- [`jina-embedding-s-en-v2`](https://huggingface.co/jinaai/jina-embedding-s-en-v2): 33 million parameters. |
|
- [`jina-embedding-b-en-v2`](https://huggingface.co/jinaai/jina-embedding-b-en-v2): 137 million parameters **(you are here)**. |
|
- [`jina-embedding-l-en-v2`](https://huggingface.co/jinaai/jina-embedding-l-en-v2): 435 million parameters. |
|
|
|
## Data & Parameters |
|
<!-- TODO: update the paper ID once it is published on arxiv --> |
|
Please check out our [technical report](https://arxiv.org/abs/2307.11224).
|
|
|
## Metrics |
|
|
|
We compared the model against `all-MiniLM-L6-v2`/`all-mpnet-base-v2` from SBERT and `text-embedding-ada-002` from OpenAI:
|
|
|
<!-- TODO: add evaluation table here --> |
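
Benchmark numbers of this kind can be reproduced with the [MTEB](https://github.com/embeddings-benchmark/mteb) library. The snippet below is a minimal sketch, assuming the model's `encode` method accepts the `batch_size` keyword that MTEB passes; adjust the task list to your needs.

```python
# pip install mteb
from mteb import MTEB
from transformers import AutoModel

model = AutoModel.from_pretrained('jinaai/jina-embedding-b-en-v2', trust_remote_code=True)

# The model exposes an encode() method, so it can be plugged into MTEB directly.
evaluation = MTEB(tasks=['STSBenchmark'])  # choose any subset of MTEB tasks
evaluation.run(model, output_folder='results/jina-embedding-b-en-v2')
```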
|
|
|
## Usage |
|
|
|
You can use Jina Embedding models directly via the `transformers` package:
|
```python
# Install the dependency first: pip install transformers
from transformers import AutoModel
from numpy.linalg import norm

cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embedding-b-en-v2', trust_remote_code=True)  # trust_remote_code is needed to use the encode method
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
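
The same `encode` method can also power a small retrieval example; the query, documents, and ranking code below are purely illustrative.

```python
import numpy as np
from numpy.linalg import norm
from transformers import AutoModel

model = AutoModel.from_pretrained('jinaai/jina-embedding-b-en-v2', trust_remote_code=True)

query = 'How is the weather today?'
docs = [
    'The weather is sunny and warm today.',
    'Transformers are a neural network architecture.',
    'It might rain later this afternoon.',
]

# Embed the query and the documents, then rank documents by cosine similarity.
query_emb = model.encode([query])[0]
doc_embs = model.encode(docs)
scores = doc_embs @ query_emb / (norm(doc_embs, axis=1) * norm(query_emb))
for idx in np.argsort(-scores):
    print(f'{scores[idx]:.3f}  {docs[idx]}')
```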
|
|
|
For long sequences, we recommend performing inference with Flash Attention, which lets you increase the batch size and throughput for long sequence lengths.
|
We ship an experimental Flash Attention implementation with the model.

Install the following triton version:

`pip install triton==2.0.0.dev20221202`

Now run the same code as above, but make sure to pass `with_flash=True` when you load the model. You also have to use either `fp16` or `bf16`:
|
```python
import torch
from transformers import AutoModel
from numpy.linalg import norm

cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))
model = AutoModel.from_pretrained(
    'jinaai/jina-embedding-b-en-v2',
    trust_remote_code=True,  # needed to use the encode method
    with_flash=True,
    torch_dtype=torch.float16,
).cuda()
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
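
As a rough illustration of long-document inference, the snippet below embeds a synthetic document of a few thousand tokens with the half-precision Flash Attention model; the document text is made up for the example.

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    'jinaai/jina-embedding-b-en-v2',
    trust_remote_code=True,
    with_flash=True,
    torch_dtype=torch.float16,
).cuda()

# A synthetic document far beyond the 512-token training length; ALiBi lets the
# model embed it in a single pass, up to the supported 8k sequence length.
long_document = ' '.join(f'Sentence number {i} about the topic.' for i in range(1000))
embedding = model.encode([long_document])[0]
print(embedding.shape)
```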
|
|
|
## Fine-tuning |
|
|
|
Please consider [Finetuner](https://github.com/jina-ai/finetuner) if you want to fine-tune the embeddings on your own data.
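
A minimal sketch of what a Finetuner run could look like is shown below; the backbone name, CSV path, loss, and epoch count are placeholders, so consult the Finetuner documentation for the supported backbones and data formats.

```python
import finetuner

finetuner.login()  # Finetuner runs fine-tuning jobs on Jina AI Cloud

run = finetuner.fit(
    model='jinaai/jina-embedding-b-en-v2',  # placeholder backbone name
    train_data='my_text_pairs.csv',         # placeholder CSV with text pairs
    loss='TripletMarginLoss',
    epochs=3,
)
print(run.status())
```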
|
|
|
## Plans |
|
The development of new multilingual models is currently underway. We will be targeting mainly the German and Spanish languages. The upcoming models will be called `jina-embedding-s/b/l-de/es-v2`. |
|
|
|
## Contact |
|
|
|
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas. |
|
|
|
## Citation |
|
|
|
If you find Jina Embeddings useful in your research, please cite the following paper: |
|
|
|
<!-- TODO: update the paper ID once it is published on arxiv --> |
|
```latex
@misc{günther2023jina,
      title={Beyond the 512-Token Barrier: Training General-Purpose Text
             Embeddings for Large Documents},
      author={Michael Günther and Jackmin Ong and Isabelle Mohr and Alaeddine Abdessalem and Tanguy Abel and Mohammad Kalim Akram and Susana Guzman and Georgios Mastrapas and Saba Sturua and Bo Wang},
      year={2023},
      eprint={2307.11224},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```