---
license: apache-2.0
datasets:
- minhalvp/islamqa
language:
- en
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
library_name: transformers
tags:
- GPT-2
- Islamic
- QA
- Fine-tuned
- Text Generation
metrics:
- perplexity
model-index:
- name: ChadGPT
  results:
  - task:
      type: text-generation
    dataset:
      name: minhalvp/islamqa
      type: text
    metrics:
    - name: Perplexity
      type: Perplexity
      value: 170.31
    source:
      name: Self-evaluated
      url: https://huggingface.co/12sciencejnv/FinedGPT
---
|
|
|
# ChadGPT |
|
This model is a fine-tuned version of GPT-2 on the `minhalvp/islamqa` dataset, a collection of Islamic question-answer pairs. It is designed to generate answers to questions about Islam.
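A minimal usage sketch with the `transformers` pipeline is shown below. The repo id `12sciencejnv/FinedGPT` is inferred from this card's source URL, and the `Question:`/`Answer:` prompt format is an assumption about how the fine-tuning data was laid out, not a documented contract of the model.

```python
from transformers import pipeline

# Repo id inferred from this card's source URL; adjust if the model
# is hosted under a different name.
generator = pipeline("text-generation", model="12sciencejnv/FinedGPT")

# The Question/Answer prompt layout is an assumption about the
# fine-tuning format.
result = generator(
    "Question: What is the significance of Ramadan?\nAnswer:",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```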
|
|
|
## License |
|
Apache 2.0 |
|
|
|
## Datasets |
|
The model is fine-tuned on the `minhalvp/islamqa` dataset from Hugging Face. |
|
|
|
## Language |
|
English |
|
|
|
## Metrics |
|
- **Perplexity**: 170.31, computed on the validation set of the `minhalvp/islamqa` dataset (see the sketch below)
|
- **Training Loss**: 2.3 (final training loss after fine-tuning) |
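The card does not document the evaluation procedure, but perplexity for a causal language model is conventionally the exponential of the mean cross-entropy loss over held-out text. A minimal sketch of that computation, again assuming the repo id from the source URL:

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id inferred from this card's source URL (an assumption).
model_id = "12sciencejnv/FinedGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def perplexity(text: str) -> float:
    # Scoring the text against itself yields the mean cross-entropy
    # over the predicted tokens; perplexity is its exponential.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

print(perplexity("What is the ruling on fasting while travelling?"))
```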
|
|
|
## Base Model |
|
The model is based on `openai-community/gpt2`.
|
|
|
## Pipeline Tag |
|
text-generation |
|
|
|
## Library Name |
|
transformers |
|
|
|
## Tags |
|
GPT-2, Islamic, QA, Fine-tuned, Text Generation |
|
|
|
## Eval Results |
|
The model achieved a final training loss of approximately 2.3 after 3 epochs of fine-tuning, alongside the validation perplexity of 170.31 reported above.
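The exact training setup is not documented on this card. The sketch below shows one plausible way to reproduce the fine-tuning with the `Trainer` API; the dataset column names (`question`, `answer`), split name, batch size, and learning rate are all assumptions to be checked against the actual dataset and recipe.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

dataset = load_dataset("minhalvp/islamqa")

def tokenize(batch):
    # Column names are assumptions; inspect the dataset for its actual fields.
    text = [f"Question: {q}\nAnswer: {a}"
            for q, a in zip(batch["question"], batch["answer"])]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

args = TrainingArguments(
    output_dir="chadgpt",
    num_train_epochs=3,             # matches the 3 epochs reported above
    per_device_train_batch_size=4,  # assumed hyperparameter
    learning_rate=5e-5,             # assumed hyperparameter
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```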