---
library_name: peft
datasets:
- xnli
license: cc-by-nc-4.0
pipeline_tag: sentence-similarity
---
These are LoRA adaptation weights for the [mT5](https://huggingface.co/google/mt5-xxl) encoder.
## Multilingual Sentence T5
This model is a multilingual extension of Sentence T5, built on the [mT5](https://huggingface.co/google/mt5-xxl) encoder. It was proposed in this [paper](hoge).
It is a sentence-embedding encoder, and its performance has been verified on cross-lingual STS and sentence retrieval tasks.
### Training Data
The model was trained on the XNLI dataset.
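For reference, XNLI is available through the 🤗 `datasets` library. This is a minimal sketch of inspecting it; the `all_languages` config is an assumption about which subset was used, not something stated in this card.
```
from datasets import load_dataset

# Load XNLI; "all_languages" is assumed here — pick the config that matches your setup.
xnli = load_dataset("xnli", "all_languages", split="train")
print(xnli[0]["premise"], xnli[0]["hypothesis"], xnli[0]["label"])
```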
### Framework versions
- PEFT 0.4.0.dev0
## How to use
0. If you have not installed `peft` and `transformers`, install them first.
```
pip install -q git+https://github.com/huggingface/transformers.git@main git+https://github.com/huggingface/peft.git
```
1. Load the model.
```
from transformers import MT5EncoderModel
from peft import PeftModel

# Load the base mT5-XXL encoder.
model = MT5EncoderModel.from_pretrained("google/mt5-xxl")
# These two calls matter when fine-tuning with gradient checkpointing; they are harmless for inference.
model.enable_input_require_grads()
model.gradient_checkpointing_enable()
# Attach the m-ST5 LoRA adapter on top of the base encoder.
model: PeftModel = PeftModel.from_pretrained(model, "pkshatech/m-ST5")
```
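Note that mT5-XXL is a very large model. If you run it on a GPU, you will likely want to move it to the device (optionally in half precision) before inference. This is an optional sketch under that assumption, not part of the original instructions; if you use it, remember to move the tokenized inputs in step 2 to the same device.
```
import torch

# Optional: place the adapted model on GPU in bfloat16 to reduce memory use (adjust to your hardware).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device, dtype=torch.bfloat16)
```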
2. To obtain sentence embeddings, apply mean pooling over the encoder's last hidden states.
```
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("google/mt5-xxl", use_fast=False)
model.eval()

texts = ["I am a dog.", "You are a cat."]
inputs = tokenizer(
    texts,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
# Zero out the hidden states of padding tokens, then average over the real tokens.
last_hidden_state[inputs.attention_mask == 0, :] = 0
sent_len = inputs.attention_mask.sum(dim=1, keepdim=True)
sent_emb = last_hidden_state.sum(dim=1) / sent_len
```
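The resulting `sent_emb` tensor holds one embedding per input sentence. For sentence-similarity use, a common follow-up (not shown in the original snippet) is to compare the embeddings with cosine similarity:
```
import torch.nn.functional as F

# Cosine similarity between the two example sentences.
similarity = F.cosine_similarity(sent_emb[0], sent_emb[1], dim=0)
print(similarity.item())
```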