---
license: apache-2.0
datasets:
- minhalvp/islamqa
language:
- en
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
library_name: transformers
tags:
- GPT-2
- Islamic
- QA
- Fine-tuned
- Text Generation
metrics:
- perplexity
model-index:
- name: ChadGPT
  results:
  - task:
      type: text-generation
    dataset:
      name: minhalvp/islamqa
      type: minhalvp/islamqa
    metrics:
    - name: Perplexity
      value: 170.31
      type: perplexity
    source:
      name: Self-evaluated
      url: https://huggingface.co/12sciencejnv/FinedGPT
---

# ChadGPT
This model is a fine-tuned version of GPT-2 on the `minhalvp/islamqa` dataset, which consists of Islamic question-answer pairs. It is designed for generating answers to Islamic questions.
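A minimal usage sketch with the `transformers` text-generation pipeline. The repo id `12sciencejnv/FinedGPT` is taken from the source URL in this card's metadata, and the `Question:`/`Answer:` prompt format is an assumption; match whatever template was used during fine-tuning for best results.

```python
from transformers import pipeline

def load_generator(model_id: str = "12sciencejnv/FinedGPT"):
    # Repo id taken from this card's source URL; adjust if the model
    # is hosted under a different name.
    return pipeline("text-generation", model=model_id)

def ask(generator, question: str, max_new_tokens: int = 100) -> str:
    # Hypothetical prompt template, not confirmed by the card.
    prompt = f"Question: {question}\nAnswer:"
    out = generator(prompt, max_new_tokens=max_new_tokens,
                    do_sample=True, top_p=0.9)
    return out[0]["generated_text"]

# Example (downloads the model weights on first call):
# gen = load_generator()
# print(ask(gen, "What are the five pillars of Islam?"))
```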

## License
Apache 2.0

## Datasets
The model is fine-tuned on the `minhalvp/islamqa` dataset from Hugging Face.

## Language
English

## Metrics
- **Perplexity**: 170.31 (calculated on the validation set of the minhalvp/islamqa dataset)
- **Training Loss**: 2.3 (final value after fine-tuning)
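
Perplexity is the exponential of the mean per-token cross-entropy loss, so the two figures above can be related with a quick check (assuming losses are in nats per token):

```python
import math

# Perplexity = exp(mean cross-entropy loss in nats per token).
val_perplexity = 170.31              # reported on the validation set
val_loss = math.log(val_perplexity)  # implied validation loss, ~5.14

train_loss = 2.3                         # reported final training loss
train_perplexity = math.exp(train_loss)  # ~9.97

# The gap between ~9.97 (train) and 170.31 (validation) suggests the
# model fits the training split considerably better than held-out data.
```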

## Base Model
The model is fine-tuned from the `openai-community/gpt2` checkpoint (the 124M-parameter GPT-2 base model).

## Pipeline Tag
text-generation

## Library Name
transformers

## Tags
GPT-2, Islamic, QA, Fine-tuned, Text Generation

## Eval Results
After 3 epochs of fine-tuning, the model reached a final training loss of approximately 2.3 and a validation-set perplexity of 170.31.