---
library_name: peft
datasets:
- xnli
license: cc-by-nc-4.0
pipeline_tag: sentence-similarity
---
These are LoRA adaptation weights for the [mT5](https://huggingface.co/google/mt5-xxl) encoder.

## Multilingual Sentence T5 (m-ST5)
This model is a multilingual extension of Sentence T5, built on the [mT5](https://huggingface.co/google/mt5-xxl) encoder and proposed in this [paper](https://arxiv.org/abs/2403.17528).
m-ST5 is a sentence-embedding encoder, and its performance has been verified on cross-lingual semantic textual similarity (STS) and sentence retrieval tasks.

### Training Data
The model was trained on the XNLI dataset.

### Framework versions

- PEFT 0.4.0.dev0

## How to use
0. If you have not installed `peft`, install it (along with `transformers`) from source.
```
pip install -q git+https://github.com/huggingface/transformers.git@main git+https://github.com/huggingface/peft.git
```
1. Load the model.
```
from transformers import MT5EncoderModel
from peft import PeftModel

# Load the mT5-XXL encoder and attach the m-ST5 LoRA weights on top of it.
model = MT5EncoderModel.from_pretrained("google/mt5-xxl")
# The next two calls are only needed if you plan to fine-tune the model further.
model.enable_input_require_grads()
model.gradient_checkpointing_enable()
model: PeftModel = PeftModel.from_pretrained(model, "pkshatech/m-ST5")
```
2. To obtain sentence embeddings, use mean pooling over the encoder outputs.
```
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-xxl", use_fast=False)
model.eval()

texts = ["I am a dog.", "You are a cat."]
inputs = tokenizer(
    texts,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
# Mean pooling: zero out padding positions, then average the token embeddings.
last_hidden_state = outputs.last_hidden_state
last_hidden_state[inputs.attention_mask == 0, :] = 0
sent_len = inputs.attention_mask.sum(dim=1, keepdim=True)
sent_emb = last_hidden_state.sum(dim=1) / sent_len
```
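
To score sentence similarity with these embeddings, cosine similarity is the usual choice. The snippet below is a minimal sketch that continues from the code above (reusing `sent_emb`); it is an illustration, not part of the original recipe.
```
import torch.nn.functional as F

# Cosine similarity between the two example sentences embedded above.
emb = F.normalize(sent_emb, p=2, dim=1)
score = (emb[0] @ emb[1]).item()
print(score)
```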

## Benchmarks
- Tatoeba: Sentence retrieval task over pairs of English sentences and sentences in other languages (see the sketch after this list).
- BUCC: Bitext mining task between English and one of four languages (German, French, Russian, and Chinese).
- XSTS: Cross-lingual semantic textual similarity task.
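
For illustration, the retrieval-style benchmarks reduce to nearest-neighbour search over cosine similarities between embeddings from the two languages. The following is a minimal sketch reusing `tokenizer` and `model` from the usage section above; the `encode` and `retrieve` helpers are ours, not part of the official evaluation code.
```
import torch
import torch.nn.functional as F

def encode(texts):
    # Mean-pooled sentence embeddings, as in the usage section above.
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    hidden[inputs.attention_mask == 0, :] = 0
    return hidden.sum(dim=1) / inputs.attention_mask.sum(dim=1, keepdim=True)

def retrieve(queries, candidates):
    # For each query sentence, return the index of the most similar candidate.
    q = F.normalize(encode(queries), p=2, dim=1)
    c = F.normalize(encode(candidates), p=2, dim=1)
    return (q @ c.T).argmax(dim=1)
```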


Please see the paper for further details.

|       | Tatoeba-14 | Tatoeba-36 | BUCC | XSTS<br>(ar-ar)|XSTS<br>(ar-en)|XSTS<br>(es-es)|XSTS<br>(es-en)|XSTS<br>(tr-en)|
| ----- | :----------: | :----------: | :----: | :---:|:----:|:----:|:----:|:----:|
| m-ST5 | 96.3       | 94.7       | 97.6 | 76.2|78.6|84.4|76.2|75.1|
| LaBSE | 95.3       | 95.0       | 93.5 | 69.1|74.5|80.8|65.5|72.0|