Update README.md
README.md
---
library_name: peft
---
These are LoRA adaptation weights for the [mT5](https://huggingface.co/google/mt5-xxl) encoder.

## Multilingual Sentence T5

This model is a multilingual extension of Sentence T5, built on the [mT5](https://huggingface.co/google/mt5-xxl) encoder and proposed in this [paper](hoge). It is an encoder for sentence embeddings, and its performance has been verified on cross-lingual STS and sentence retrieval.

### Framework versions

- PEFT 0.4.0.dev0

## How to use

0. If you have not installed peft, please do so:

```
pip install -q git+https://github.com/huggingface/transformers.git@main git+https://github.com/huggingface/peft.git
```

1. Load the model.

```
from transformers import MT5EncoderModel
from peft import PeftModel

# Load the base mT5-xxl encoder, then wrap it with the m-ST5 LoRA adapter.
model = MT5EncoderModel.from_pretrained("google/mt5-xxl")
model.enable_input_require_grads()
model.gradient_checkpointing_enable()
model: PeftModel = PeftModel.from_pretrained(model, "pkshatech/m-ST5")
```
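
As a quick sanity check that the adapter attached (a sketch using PEFT's standard parameter summary, not part of the original card):

```
# Reports LoRA (trainable) vs. frozen base parameters.
model.print_trainable_parameters()
```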

2. To obtain sentence embeddings, use mean pooling over the encoder outputs.

```
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-xxl", use_fast=False)
model.eval()

texts = ["I am a dog.", "You are a cat."]
inputs = tokenizer(
    texts,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
# Zero out the padding positions, then average over each sentence's tokens.
last_hidden_state[inputs.attention_mask == 0, :] = 0
sent_len = inputs.attention_mask.sum(dim=1, keepdim=True)
sent_emb = last_hidden_state.sum(dim=1) / sent_len
```
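
Each row of `sent_emb` is the embedding of one input text. To compare sentences, cosine similarity is the usual choice; a minimal sketch scoring the two example sentences above:

```
import torch.nn.functional as F

# Cosine similarity between the embeddings of the two example sentences.
score = F.cosine_similarity(sent_emb[0], sent_emb[1], dim=0)
print(score.item())
```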