---
license: mit
language:
- cs
---
|
# Model Card for mt5-base-multi-label-cs-iiib
|
|
|
<!-- Provide a quick summary of what the model is/does. -->
|
|
|
This model is fine-tuned for multi-label text classification of Supportive Interactions in Czech Instant Messenger dialogs of adolescents.
|
|
|
## Model Description
|
|
|
The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of adolescents. The classification is multi-label: the model outputs any combination of the tags 'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci' as a single string joined with ', ' (ordered alphabetically). Each tag indicates the presence of the corresponding category of Supportive Interactions: no tag, informational support, emotional support, social companionship, appraisal, instrumental support. The inputs of the model are a target utterance and its bi-directional context; the label of the example is determined by the label of the target utterance.
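
For convenience, the generated label string can be split back into individual tags. The sketch below maps the Czech tags to the English glosses listed above; the dictionary and the helper `parse_labels` are illustrative, not part of the model's API:

```python
# Czech tags emitted by the model, mapped to the English glosses above
LABEL_GLOSSES = {
    "NO TAG": "no tag",
    "Informační podpora": "informational support",
    "Emocionální podpora": "emotional support",
    "Začlenění do skupiny": "social companionship",
    "Uznání": "appraisal",
    "Nabídka pomoci": "instrumental support",
}

def parse_labels(generated: str) -> list:
    # Split the ', '-joined output string into individual tags
    return [tag.strip() for tag in generated.split(",") if tag.strip()]

print([LABEL_GLOSSES[t] for t in parse_labels("Emocionální podpora, Uznání")])
# ['emotional support', 'appraisal']
```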
|
|
|
- **Developed by:** Anonymous
- **Language(s):** multilingual
- **Finetuned from:** mt5-base
|
|
|
## Model Sources
|
|
|
<!-- Provide the basic links for the model. -->
|
|
|
- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!
|
|
|
## Usage

Here is how to use this model to classify a context window of a dialog:
|
|
|
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

# Target utterance and its ';'-separated bi-directional context window
test_texts = ['Utterance2']
test_text_pairs = ['Utterance1;Utterance2;Utterance3']

checkpoint_path = "chi2024/mt5-base-multi-label-cs-iiib"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\
    .to("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)


def verbalize_input(text: str, text_pair: str) -> str:
    # The model expects the target utterance and its context in one prompt
    return "Utterance: %s\nContext: %s" % (text, text_pair)


def predict_one(text: str, pair: str) -> list:
    input_pair = verbalize_input(text, pair)
    inputs = tokenizer(input_pair, return_tensors="pt", padding=True,
                       truncation=True, max_length=256).to(model.device)
    outputs = model.generate(**inputs)
    decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    # The model generates all predicted tags as a single ', '-joined string
    return [tag.strip() for tag in decoded.split(",") if tag.strip()]


dec = predict_one(test_texts[0], test_text_pairs[0])
print(dec)
```
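
Several context windows can also be scored in one batch. The `predict_batch` helper below is a small sketch that reuses `verbalize_input`, `model`, and `tokenizer` from the snippet above; it is not part of the original card:

```python
def predict_batch(texts, pairs):
    # Verbalize every (utterance, context) pair and tokenize them as one batch
    verbalized = [verbalize_input(t, p) for t, p in zip(texts, pairs)]
    inputs = tokenizer(verbalized, return_tensors="pt", padding=True,
                       truncation=True, max_length=256).to(model.device)
    outputs = model.generate(**inputs)
    decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    # One list of tags per input example
    return [[tag.strip() for tag in d.split(",") if tag.strip()] for d in decoded]


print(predict_batch(test_texts, test_text_pairs))
```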