Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("VerbACxSS/sempl-it-gpt2-small-italian", model_max_length=1024)
model = AutoModelForCausalLM.from_pretrained("VerbACxSS/sempl-it-gpt2-small-italian")

model.eval()

text_to_simplify = 'Nella fattispecie, questo documento è di natura prescrittiva'
prompt = f'### [Input]:\n{text_to_simplify}\n\n###[Output]:\n'

# Tokenize the prompt; the GPT-2 tokenizer has no padding token, so padding is omitted for a single input.
x = tokenizer(prompt, max_length=1024, truncation=True, return_tensors='pt').input_ids
# Generate up to the model's 1024-token context window.
y = model.generate(x, max_length=1024)[0]
# Decode the full generated sequence (decode() needs no length arguments; generate() already bounds it).
y_dec = tokenizer.decode(y)
# Keep only the text between the output marker and the end-of-text token.
output = y_dec.split('###[Output]:\n')[1].split('<|endoftext|>')[0].strip()

print(output)
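The prompt construction and output parsing in the snippet can be factored into small reusable helpers. This is a sketch with hypothetical names (build_prompt, extract_output, not part of the released model code) that mirrors the exact template the snippet uses, shown here with a stand-in decoded string so it runs without loading the model:

```python
def build_prompt(text: str) -> str:
    # Wrap the input text in the instruction template used above.
    return f'### [Input]:\n{text}\n\n###[Output]:\n'


def extract_output(decoded: str) -> str:
    # Keep only the generated text between the output marker
    # and the end-of-text token.
    return decoded.split('###[Output]:\n')[1].split('<|endoftext|>')[0].strip()


prompt = build_prompt('Nella fattispecie, questo documento è di natura prescrittiva')
# Stand-in for tokenizer.decode(model.generate(...)): the model echoes the
# prompt and appends the simplified text plus the end-of-text token.
decoded = prompt + 'Questo documento contiene delle regole da seguire<|endoftext|>'
print(extract_output(decoded))  # → Questo documento contiene delle regole da seguire
```

Keeping the parsing in one place makes it easy to apply the same template to a list of texts in a loop.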

Acknowledgements

This contribution is a result of research conducted within the framework of the PRIN 2020 (Progetti di Rilevante Interesse Nazionale) project "VerbACxSS: on analytic verbs, complexity, synthetic verbs, and simplification. For accessibility" (Prot. 2020BJKB9M), funded by the Italian Ministero dell'Università e della Ricerca.

Model size: 109M parameters (F32, Safetensors)