
GPT-Neo Romanian 125M

This model is a GPT-Neo transformer decoder, built on EleutherAI's replication of the GPT-3 architecture.

It was trained on a thoroughly cleaned Romanian corpus of about 40 GB, composed of OSCAR, OPUS, Wikipedia, literature and various other texts, joined together and deduplicated. Training ran for about a month, totaling 5.8M steps on a TPU v3 machine.
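The exact cleaning and deduplication pipeline is not published here; as a rough illustration only, exact-match deduplication over a one-document-per-line corpus (an assumed layout; dedupe_lines is a hypothetical helper) might be sketched as:

import hashlib

# Hypothetical sketch: keep only the first occurrence of each document,
# assuming one document per line (the real pipeline may differ).
def dedupe_lines(in_path, out_path):
    seen = set()
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            digest = hashlib.sha256(line.strip().encode("utf-8")).hexdigest()
            if digest not in seen:  # drop exact duplicates
                seen.add(digest)
                dst.write(line)

Usage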

from transformers import GPTNeoForCausalLM, GPT2Tokenizer

# Load the model and its tokenizer from the Hugging Face Hub
model = GPTNeoForCausalLM.from_pretrained("iliemihai/gpt-neo-romanian-125m")
tokenizer = GPT2Tokenizer.from_pretrained("iliemihai/gpt-neo-romanian-125m")

prompt = "Cine a fost mihai eminescu"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate with contrastive search (penalty_alpha together with top_k)
output = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=64)
result = tokenizer.decode(output[0], skip_special_tokens=True)

print(result)
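The generate call above uses contrastive search. Plain sampling works as well; the do_sample/top_p/temperature values below are illustrative, not tuned for this model:

# Alternative decoding: nucleus (top-p) sampling; values are illustrative.
output = model.generate(
    input_ids,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    max_length=64,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))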

Authors:

  • Dumitrescu Stefan
  • Mihai Ilie

Evaluation

Evaluation results will be added soon; see also https://github.com/dumitrescustefan/Romanian-Transformers

Acknowledgements

Thanks to the TPU Research Cloud for providing the TPU v3 machine needed to train this model!
