---
inference:
parameters:
max_new_tokens: 65
license: apache-2.0
language:
- en
metrics:
- bleu
- f1
- perplexity
widget:
- text: "<knowledge> Yes. You need to notify POSTE assicura. <emotion> curious <dialog> <user> Good evening, can you clarify what types of people can't be
insured with Poste Assicura, as stated in document_0? <system> Certainly, according to document_0, people who are or have been
affected by alcoholism, drug addiction, and HIV infection with or without AIDS cannot be insured. <user> What happens if I am
currently a policyholder with Poste Assicura and I develop alcoholism? Do I need to notify them? <intent>"
---
# Model Card for Flan-T5 Large Emotions
<!-- Provide a quick summary of what the model is/does. -->
This is a Flan-T5-large based model finetuned on the POSTE tasks, using only the emotion data as an additional input signal. For signals that are not available, please still include the respective tag, just leave it empty. Consider the example from the Inference API on the right as a starting point.
**Regarding the output of the inference widget on the right:** it just shows that the model can be loaded and what the input should look like. The generation might look a bit odd, since it contains the predicted intent, the slots, and the model response, but the Inference API removes all special tokens during postprocessing.
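The input format above can be assembled programmatically. Below is a minimal sketch of a helper that keeps every tag in place even when a signal is missing; the helper name `build_input` is hypothetical, and the tag set (`<knowledge>`, `<emotion>`, `<dialog>`, `<user>`, `<system>`, `<intent>`) is taken from the widget example in this card:

```python
def build_input(knowledge="", emotion="", turns=()):
    """Assemble a model input string.

    Tags for unavailable signals stay in place, just empty, as required
    by the model. `turns` is a sequence of (speaker, text) pairs, where
    speaker is e.g. "user" or "system".
    """
    dialog = " ".join(f"<{speaker}> {text}" for speaker, text in turns)
    return f"<knowledge> {knowledge} <emotion> {emotion} <dialog> {dialog} <intent>"

# No knowledge snippet and no emotion label: the tags remain, but empty.
prompt = build_input(turns=[("user", "Hi, I need help with parcel choice. Can you do that?")])
print(prompt)
# <knowledge>  <emotion>  <dialog> <user> Hi, I need help with parcel choice. Can you do that? <intent>
```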
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Flan-T5-large](https://huggingface.co/google/flan-t5-large)
## How to Get Started with the Model
```python
import re

import torch
from transformers import (AutoModelForSeq2SeqLM,
                          AutoTokenizer)

model_path = "..."  # set this to the model repository id or a local checkpoint path

torch_device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path,
                                              pad_token_id=tokenizer.eos_token_id).to(torch_device)

input_text = """<knowledge> <user_emotion> <dialog> <user> Hi, I need help with parcel choice. Can you do that? <intent>"""
input_enc = tokenizer.encode(input_text,
                             truncation=True,
                             return_tensors='pt').to(torch_device)

output = model.generate(input_enc, max_new_tokens=75)
output_dec = tokenizer.decode(output[0])

# The output should look like: <intent> ... <slots> ... <system> ...
pattern = r'<intent>|<slots>|<system>'
special_toks = re.findall(pattern, output_dec)
splits = re.split(pattern, output_dec)[1:]
for special_tok, text in zip(special_toks, splits):
    if len(text.strip()) > 0:
        print(f'{special_tok}: {text.strip()}')
```