---
license: mit
datasets:
  - uonlp/CulturaX
language:
  - tr
pipeline_tag: text-generation
tags:
  - Turkish
  - turkish
  - gpt2
---

# turkish-gpt2-large

This is a Turkish GPT-2 large model. GPT-2 is designed for text generation: given a text snippet, it continues it in a coherent and contextually relevant manner. Because the training data spans diverse sources, including websites, books, and other texts, the model can exhibit biases. Users should be aware of these biases and use the model responsibly.

### Example Usage

```python
from transformers import AutoTokenizer, GPT2LMHeadModel, pipeline

model = GPT2LMHeadModel.from_pretrained("ytu-ce-cosmos/turkish-gpt2-large")
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-gpt2-large")

text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
r = text_generator("Teknolojinin gelişimi hayatımızı önemli ölçüde etkiledi. ", max_length=100)
print(r)
# [{'generated_text': 'Teknolojinin gelişimi hayatımızı önemli ölçüde etkiledi. "Sosyal ağ" adını verdiğimiz yeni bir iletişim çağımız oluştu. '}]
```
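For more varied continuations, decoding parameters can be passed through the pipeline to the underlying `generate` call. The sketch below reuses the same model and prompt as above; the decoding values (`top_p`, `temperature`, `num_return_sequences`) are illustrative, not tuned recommendations for this model.

```python
from transformers import AutoTokenizer, GPT2LMHeadModel, pipeline

model = GPT2LMHeadModel.from_pretrained("ytu-ce-cosmos/turkish-gpt2-large")
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-gpt2-large")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(
    "Teknolojinin gelişimi hayatımızı önemli ölçüde etkiledi. ",
    max_length=100,
    do_sample=True,          # sample instead of greedy decoding
    top_p=0.95,              # nucleus sampling (illustrative value)
    temperature=0.8,         # illustrative value, not a tuned recommendation
    num_return_sequences=3,  # return three alternative continuations
)
for out in outputs:
    print(out["generated_text"])
```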

### Acknowledgments

- Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗

### Citation

```bibtex
@article{kesgin2024introducing,
  title={Introducing cosmosGPT: Monolingual Training for Turkish Language Models},
  author={Kesgin, H Toprak and Yuce, M Kaan and Dogan, Eren and Uzun, M Egemen and Uz, Atahan and Seyrek, H Emre and Zeer, Ahmed and Amasyali, M Fatih},
  journal={arXiv preprint arXiv:2404.17336},
  year={2024}
}
```

### Contact

COSMOS AI Research Group, Yildiz Technical University Computer Engineering Department
https://cosmos.yildiz.edu.tr/
[email protected]