ChadGPT

This model is a fine-tuned version of GPT-2 on the minhalvp/islamqa dataset, which consists of Islamic question-answer pairs. It is designed for generating answers to Islamic questions.
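
Since the card lists transformers as the library and text-generation as the pipeline tag, the model can be used with the standard pipeline API. The sketch below is illustrative: the repo ID 12sciencejnv/FinedGPT is taken from this page, and the "Question: ... Answer:" prompt format is an assumption, since the card does not document a prompt template.

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub.
# Repo ID taken from this model page.
generator = pipeline("text-generation", model="12sciencejnv/FinedGPT")

# The prompt template below is an assumption; the card does not document one.
prompt = "Question: What are the five pillars of Islam?\nAnswer:"
result = generator(prompt, max_new_tokens=100, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```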

License

Apache 2.0

Datasets

The model is fine-tuned on the minhalvp/islamqa dataset from Hugging Face.

Language

English

Metrics

  • Perplexity: 170.31 (calculated on the validation set of the minhalvp/islamqa dataset; see the sketch after this list)
  • Training Loss: 2.3 (final training loss after fine-tuning)
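
Perplexity is the exponential of the mean cross-entropy loss on held-out text, so 170.31 corresponds to roughly ln(170.31) ≈ 5.14 nats per token. A minimal sketch of the calculation is below; the dataset split, sample size, and column names are assumptions, since the card does not describe how the validation set was constructed.

```python
import math

import torch
from datasets import load_dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("12sciencejnv/FinedGPT").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# The split and column names here are assumptions; the card does not say
# how the validation set was built.
data = load_dataset("minhalvp/islamqa", split="train").select(range(100))

losses = []
with torch.no_grad():
    for row in data:
        text = row["question"] + " " + row["answer"]  # column names assumed
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        loss = model(**enc, labels=enc["input_ids"]).loss
        losses.append(loss.item())

# Perplexity = exp(mean cross-entropy loss). A simple per-example average is
# used here; a token-weighted average would be more precise.
print("perplexity:", math.exp(sum(losses) / len(losses)))
```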

Base Model

The model is based on the gpt2 architecture.

Pipeline Tag

text-generation

Library Name

transformers
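
For finer control over decoding than the pipeline offers, the model can also be loaded directly. This is the standard transformers loading pattern rather than anything documented by the card; if the repo lacks tokenizer files, the base gpt2 tokenizer is the natural fallback.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard lower-level loading; repo ID taken from this model page.
tokenizer = AutoTokenizer.from_pretrained("12sciencejnv/FinedGPT")
model = AutoModelForCausalLM.from_pretrained("12sciencejnv/FinedGPT")

inputs = tokenizer("Question: Is fasting obligatory?\nAnswer:", return_tensors="pt")
# GPT-2 has no pad token, so reuse the EOS token id for padding.
output_ids = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```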

Tags

GPT-2, Islamic, QA, Fine-tuned, Text Generation

Eval Results

The model achieved a training loss of approximately 2.3 after 3 epochs of fine-tuning.

Model Size

124M parameters (Safetensors format, F32 tensors)