from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned simplification model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("VerbACxSS/sempl-it-gpt2-small-italian", model_max_length=1024)
model = AutoModelForCausalLM.from_pretrained("VerbACxSS/sempl-it-gpt2-small-italian")
model.eval()

# Italian text to simplify ("In this specific case, this document is prescriptive in nature").
text_to_simplify = 'Nella fattispecie, questo documento è di natura prescrittiva'

# Wrap the text in the prompt template expected by the model, then tokenize it.
prompt = f'### [Input]:\n{text_to_simplify}\n\n###[Output]:\n'
x = tokenizer(prompt, max_length=1024, truncation=True, return_tensors='pt').input_ids

# Generate, decode, and keep only the text after the output marker.
y = model.generate(x, max_length=1024)[0]
y_dec = tokenizer.decode(y)
output = y_dec.split('###[Output]:\n')[1].split('<|endoftext|>')[0].strip()
print(output)
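To simplify several texts in a row, the same steps can be wrapped in a small helper. The following is a minimal sketch that reuses the tokenizer, model, prompt template, and output parsing shown above; the simplify helper name, the torch.no_grad() call, and the example loop are illustrative choices, not part of the original example.

import torch

def simplify(text):
    # Build the same prompt template used in the example above.
    prompt = f'### [Input]:\n{text}\n\n###[Output]:\n'
    x = tokenizer(prompt, max_length=1024, truncation=True, return_tensors='pt').input_ids
    # Run generation without gradient tracking, since this is inference only.
    with torch.no_grad():
        y = model.generate(x, max_length=1024)[0]
    decoded = tokenizer.decode(y)
    # Keep only the generated simplification, dropping the prompt and end-of-text token.
    return decoded.split('###[Output]:\n')[1].split('<|endoftext|>')[0].strip()

texts = [
    'Nella fattispecie, questo documento è di natura prescrittiva',
]
for t in texts:
    print(simplify(t))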
This contribution is a result of the research conducted within the framework of the PRIN 2020 (Progetti di Rilevante Interesse Nazionale) "VerbACxSS: on analytic verbs, complexity, synthetic verbs, and simplification. For accessibility" (Prot. 2020BJKB9M), funded by the Italian Ministero dell'Università e della Ricerca.
Base model: GroNLP/gpt2-small-italian