ChadGPT

This model is GPT-2 fine-tuned on the minhalvp/islamqa dataset, a collection of Islamic question-answer pairs. It is intended for generating answers to questions about Islam.
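
A minimal usage sketch with the transformers text-generation pipeline follows. The repository id 12sciencejnv/FinedGPT is taken from the model listing, and the prompt format is an illustrative guess, not a documented template:

```python
from transformers import pipeline

# Load the fine-tuned model; the repository id is taken from the
# model listing and may differ for other hosted copies.
generator = pipeline("text-generation", model="12sciencejnv/FinedGPT")

# The prompt format is a guess; match whatever format was used during fine-tuning.
prompt = "Question: What are the five pillars of Islam?\nAnswer:"
result = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```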

License

Apache 2.0

Datasets

The model is fine-tuned on the minhalvp/islamqa dataset from Hugging Face.

Language

English

Metrics

  • Perplexity: 170.31, calculated on the validation set of the minhalvp/islamqa dataset (see the sketch after this list for how perplexity relates to cross-entropy loss)
  • Training Loss: 2.3 (final training loss after fine-tuning)
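
Perplexity is the exponential of the mean cross-entropy loss over the evaluation data. Below is a minimal sketch of how such a figure could be reproduced; the split name and the "answer" field are assumptions about the dataset schema, not the exact evaluation script:

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from the model listing; adjust if it differs.
tokenizer = AutoTokenizer.from_pretrained("12sciencejnv/FinedGPT")
model = AutoModelForCausalLM.from_pretrained("12sciencejnv/FinedGPT")
model.eval()

# Split name is an assumption; the card only says "validation set".
dataset = load_dataset("minhalvp/islamqa", split="validation")

total_loss, n_examples = 0.0, 0
for record in dataset:
    # Field name "answer" is a guess at the dataset schema.
    enc = tokenizer(record["answer"], return_tensors="pt",
                    truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    total_loss += out.loss.item()
    n_examples += 1

# Perplexity = exp(mean cross-entropy loss). This is a simple per-example
# average; a token-weighted average would be more precise.
print("perplexity:", math.exp(total_loss / n_examples))
```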

Base Model

The model is based on the gpt2 architecture (124M parameters, F32 weights).

Pipeline Tag

text-generation

Library Name

transformers

Tags

GPT-2, Islamic, QA, Fine-tuned, Text Generation

Eval Results

After 3 epochs of fine-tuning, the model reached a final training loss of approximately 2.3 and a perplexity of 170.31 on the validation set.
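
For reference, here is a minimal fine-tuning sketch matching the reported setup (GPT-2 base, causal language modeling, 3 epochs). The split and field names ("train", "question", "answer") and all hyperparameters other than the epoch count are illustrative assumptions, not the actual training configuration:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("minhalvp/islamqa")

def tokenize(batch):
    # Field names "question"/"answer" are guesses at the dataset schema.
    text = [q + "\n" + a for q, a in zip(batch["question"], batch["answer"])]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset["train"].column_names)

args = TrainingArguments(
    output_dir="chadgpt-finetune",
    num_train_epochs=3,              # epoch count from the card
    per_device_train_batch_size=8,   # illustrative
    learning_rate=5e-5,              # illustrative
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    # mlm=False gives standard causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```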
