---
pipeline_tag: text-generation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the `PyTorchModelHubMixin` integration:

- Library: https://huggingface.co/Aananda-giri/GPT2-Nepali/
- Docs: [More Information Needed]
# GPT-2-Nepali-512 Model

The `512` in the model name refers to the context length: the model attends to at most 512 tokens at a time, so longer prompts must be truncated (see the sketch below).

This repository contains a custom GPT-2 model trained on Nepali text. Follow the instructions below to use this model for text generation.
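Because the context window is fixed at 512 tokens, it can help to check a prompt's token count before generation. Below is a minimal sketch using this repository's tokenizer; the `truncation` and `max_length` arguments are standard `transformers` tokenizer options, and the 512 limit comes from the model name:

```python
from transformers import PreTrainedTokenizerFast

# Load the same tokenizer the model was trained with
tokenizer = PreTrainedTokenizerFast.from_pretrained("Aananda-giri/NepaliBPE")

prompt = "रामले भात"  # example prompt from this card

# Count tokens; anything beyond 512 exceeds the model's context window
token_ids = tokenizer.encode(prompt)
print(f"Prompt length: {len(token_ids)} tokens")

# Truncate to the context length if needed
encoded = tokenizer(prompt, truncation=True, max_length=512)
print(f"Length after truncation: {len(encoded['input_ids'])} tokens")
```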
## How to Use the Model

### 1. Download the Required Code

Save the `model_code.py` file in the same directory where you'll run the script.

### 2. Install Required Libraries

Ensure you have the necessary libraries installed:

```bash
pip install transformers torch
```
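As an optional sanity check (no minimum versions are stated in this card), confirm that both libraries import cleanly:

```python
# Quick check that the dependencies are installed and importable
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```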
### 3. Run the Following Code

Here's an example to load the model and generate text:

```python
import torch
from model_code import GPTModel, generate_and_print_sample
from transformers import PreTrainedTokenizerFast

# Load the tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("Aananda-giri/NepaliBPE")

# Define the starting text
start_context = "रामले भात"

# Load the pre-trained model
loaded_model = GPTModel.from_pretrained("Aananda-giri/GPT2-Nepali")

# Move the model to the appropriate device (CPU or GPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
loaded_model.to(device)

# Generate text
generate_and_print_sample(
    loaded_model,
    tokenizer,
    device,
    start_context,
)
```
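`generate_and_print_sample` is defined in `model_code.py`. If you want more control over decoding, the sketch below reuses the variables from the snippet above and implements plain greedy decoding. It assumes, as is common in from-scratch GPT implementations, that the model's forward pass takes a `(batch, seq_len)` tensor of token IDs and returns logits of shape `(batch, seq_len, vocab_size)`; check `model_code.py` to confirm this matches `GPTModel`.

```python
import torch

@torch.no_grad()
def greedy_generate(model, tokenizer, device, prompt, max_new_tokens=50):
    """Greedy decoding sketch. Assumes model(input_ids) -> logits of shape
    (batch, seq_len, vocab_size); verify against model_code.py."""
    model.eval()
    # Encode the prompt and add a batch dimension
    ids = torch.tensor([tokenizer.encode(prompt)], device=device)

    for _ in range(max_new_tokens):
        # Keep only the last 512 tokens (the model's context length)
        logits = model(ids[:, -512:])
        # Pick the most likely next token from the last position
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=1)

    return tokenizer.decode(ids[0].tolist())

print(greedy_generate(loaded_model, tokenizer, device, start_context))
```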
## Additional Notes

- **Tokenizer**: The model uses a pre-trained tokenizer available at [Aananda-giri/NepaliBPE](https://huggingface.co/Aananda-giri/NepaliBPE). Ensure this is downloaded and accessible during runtime (see the sketch after this list for pre-downloading it).
- **Dependencies**: This code requires `transformers` (by Hugging Face) and `torch` (PyTorch). Install them if not already installed.
- **Device Compatibility**: The script automatically detects whether a CUDA-enabled GPU is available and uses it for faster inference. If not, it defaults to the CPU.
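If the runtime environment has limited connectivity, one option is to fetch the tokenizer and model files ahead of time with `huggingface_hub` (installed as a dependency of `transformers`). A minimal sketch; the snapshots are cached locally and picked up automatically by later `from_pretrained()` calls:

```python
from huggingface_hub import snapshot_download

# Cache the tokenizer and model repositories locally so later
# from_pretrained() calls can resolve them without re-downloading
snapshot_download(repo_id="Aananda-giri/NepaliBPE")
snapshot_download(repo_id="Aananda-giri/GPT2-Nepali")
```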
## Example Output

**Input:**

```
रामले भात
```

*(roughly: "Ram ... rice", the start of a sentence like "Ram ate rice")*

**Generated Text:**

```
रामले भात खाएर सन्तोष माने। ऊ आफ्ना साथीहरूसँग रमाइलो गरिरहेको थियो।
```

*(roughly: "Having eaten rice, Ram felt satisfied. He was having fun with his friends.")*