---
library_name: transformers
tags: [gpt, hebrew, language-model, pretraining]
license: apache-2.0
datasets:
- oscar-corpus/OSCAR-2301
metrics:
- perplexity
model-index:
- name: HebrewGPT_Base_v1.0
  results:
  - task:
      name: Language Modeling
      type: language-modeling
    dataset:
      name: "OSCAR Hebrew"
      type: oscar-corpus/OSCAR-2301
    metrics:
    - name: Perplexity
      type: perplexity
      value: More Information Needed
---

# HebrewGPT_Base_v1.0

This is the HebrewGPT_Base_v1.0 model, a foundational GPT model for the Hebrew language, pretrained from scratch on the OSCAR Hebrew dataset.

## Model Details

### Model Description

Developed by Hooking AI, this model is the base version of a Hebrew GPT series intended for further fine-tuning and downstream NLP tasks in Hebrew. It serves as a generic foundation for Hebrew language understanding and generation.

- **Developed by:** Hooking AI
- **Model type:** GPT (Generative Pre-trained Transformer)
- **Language(s) (NLP):** Hebrew
- **License:** Apache-2.0
- **Repository:** [hooking-dev/Hebrew_v1.0](https://huggingface.co/hooking-dev/Hebrew_v1.0)

## Uses

### Direct Use

This model can be used directly for open-ended Hebrew text generation and continuation. It has not been fine-tuned on any downstream task, so for applications such as conversation modeling or text summarization it is best used as a starting point for further fine-tuning.

### Out-of-Scope Use

The model is not recommended for use in high-stakes scenarios such as medical diagnosis or legal decision-making due to the lack of domain-specific fine-tuning and potential biases inherent in language models.

## Bias, Risks, and Limitations

The model, like many language models, likely contains biases that are present in the training data. Users should be aware of these potential biases when using the model, especially in sensitive applications. Further research and auditing for bias are recommended before deploying the model in production.

## How to Get Started with the Model

To get started with HebrewGPT_Base_v1.0, load the model using the Transformers library:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("hooking-dev/Hebrew_v1.0")
tokenizer = GPT2Tokenizer.from_pretrained("hooking-dev/Hebrew_v1.0")

# Example text
input_ids = tokenizer.encode("שלום, מה שלומך?", return_tensors="pt")

# Generate text
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
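
Greedy decoding (the default for `generate`) can be repetitive; for more varied Hebrew output you can enable sampling. The settings below are illustrative defaults, not values tuned for this model:

```python
# Sampling-based generation (illustrative settings, not tuned for this model)
outputs = model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```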

## Training Details

### Training Data

The model was trained on the OSCAR Hebrew dataset, a large-scale, open corpus consisting of diverse text collected from the web, reflecting common usage of Hebrew in various contexts. For more details on the dataset, see the citations related to OSCAR below.
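
To inspect the kind of data the model was trained on, the Hebrew subset of OSCAR-2301 can be streamed with the `datasets` library (a minimal sketch; the dataset is gated on the Hugging Face Hub, so you may need to accept its terms and log in first):

```python
from datasets import load_dataset

# Stream the Hebrew portion of OSCAR-2301 (gated dataset; requires accepting its terms)
dataset = load_dataset(
    "oscar-corpus/OSCAR-2301",
    language="he",
    split="train",
    streaming=True,
)

# Peek at the first document
print(next(iter(dataset))["text"][:200])
```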

### Training Procedure

#### Training Hyperparameters

- **Optimizer:** AdamW
- **Learning Rate:** 0.0002
- **Training Epochs:** 2
- **Batch Size:** 16
- **Sequence Length:** 512
- **Warmup Steps:** 500
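
For orientation, these settings correspond roughly to the following `TrainingArguments` (a sketch only; the original training script is not published with this card, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters above; illustrative, not the original script
training_args = TrainingArguments(
    output_dir="hebrew-gpt-base",   # placeholder
    num_train_epochs=2,
    per_device_train_batch_size=16,
    learning_rate=2e-4,
    warmup_steps=500,
    optim="adamw_torch",
)
# The 512-token sequence length is applied when tokenizing the corpus,
# e.g. tokenizer(..., truncation=True, max_length=512).
```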

## Evaluation

### Testing Data, Factors & Metrics

Since this model is a base model and not fine-tuned on specific downstream tasks, standard language modeling metrics such as perplexity were primarily considered during development. Detailed evaluation results will be added as further testing is conducted.
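
To compute perplexity yourself on a Hebrew sample, one simple approach (not necessarily the evaluation protocol used during development) is to exponentiate the model's average cross-entropy loss, reusing the `model` and `tokenizer` loaded above:

```python
import torch

# Perplexity of the model on a short Hebrew sample (minimal sketch)
text = "זוהי דוגמה קצרה בעברית לבדיקת המודל."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss

print(f"Perplexity: {torch.exp(loss).item():.2f}")
```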

## Technical Specifications

### Model Architecture and Objective

The model uses a standard GPT architecture with 16 transformer layers, 16 attention heads, and a hidden size of 1024.
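
Expressed as a GPT-2 style configuration, the architecture above looks roughly like this (a sketch; the vocabulary size is a placeholder and the context length is assumed to match the 512-token training sequence length):

```python
from transformers import GPT2Config

# Architecture described above as a GPT-2 style config (sketch only)
config = GPT2Config(
    n_layer=16,        # transformer layers
    n_head=16,         # attention heads
    n_embd=1024,       # hidden size
    n_positions=512,   # assumed context length (matches training sequence length)
    vocab_size=50257,  # placeholder; the actual tokenizer vocabulary may differ
)
```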

### Compute Infrastructure

Training was conducted on GPU-accelerated hardware, specifically using NVIDIA Tesla GPUs.

## Citation

If you use this model in your research, please cite it as follows:

**BibTeX:**

```bibtex
@misc{hebrewgpt_base_v1_0,
  title={HebrewGPT Base Model},
  author={Hooking AI},
  howpublished={Hugging Face Model Hub},
  year={2024},
  url={https://huggingface.co/hooking-dev/Hebrew_v1.0}
}

@article{2022arXiv221210440J,
  author = {{Jansen}, Tim and {Tong}, Yangling and {Zevallos}, Victoria and {Ortiz Suarez}, Pedro},
  title = "{Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data}",
  journal = {arXiv e-prints},
  year = 2022,
  month = dec,
  eid = {arXiv:2212.10440},
  pages = {arXiv:2212.10440},
  doi = {10.48550/arXiv.2212.10440},
  archivePrefix = {arXiv},
  eprint = {2212.10440},
  primaryClass = {cs.CL},
  adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv221210440J},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

```