Update README.md
README.md
CHANGED
@@ -1,3 +1,150 @@
---
license: mit
---

# Model Card for ronigold/dictalm2.0-instruct-fine-tuned

This is a fine-tuned version of the Dicta-IL dictalm2.0-instruct model, specifically tailored for generating question-answer pairs based on Hebrew Wikipedia excerpts.
The model was fine-tuned to improve its ability to understand and generate natural questions and their corresponding answers in Hebrew.

## Model Details

### Model Description
The model, ronigold/dictalm2.0-instruct-fine-tuned, is a fine-tuned version of dictalm2.0-instruct trained on a synthetically generated dataset. The dataset was created by the model itself from excerpts of the Hebrew Wikipedia, which were then used to generate questions and answers, thereby strengthening the model's capability at this specific task.

- **Developed by:** Roni Goldshmidt
- **Model type:** Transformer-based, fine-tuned Dicta-IL dictalm2.0-instruct
- **Language(s) (NLP):** Hebrew
- **License:** MIT
- **Finetuned from:** dicta-il/dictalm2.0-instruct

## Uses

### Direct Use
The model is well suited to educational and informational applications that need contextual question-answer pairs generated from textual content, particularly in Hebrew.

### Out-of-Scope Use
The model is not intended for use cases where factual accuracy is critical, such as medical advice or legal information, since its answers are generated from unverified sources.

## Bias, Risks, and Limitations
While the model is robust at generating context-relevant Q&A pairs, it may still inherit or amplify biases present in the training data, which comes primarily from Wikipedia. Users should critically evaluate the model's output, especially in sensitive contexts.

### Recommendations
In sensitive or critical applications, use the model with an additional layer of human oversight to ensure the accuracy and appropriateness of the generated content.

## How to Get Started with the Model
To get started, load the model with the Hugging Face Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ronigold/dictalm2.0-instruct-fine-tuned"
model = AutoModelForCausalLM.from_pretrained(model_name)  # generative (causal LM) checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
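
Once loaded, the model can be prompted like any other instruct-tuned generative model. The following is a minimal generation sketch; the Hebrew prompt wording is illustrative and not necessarily the exact template used during fine-tuning:

```python
# Minimal generation sketch. The prompt format is an assumption for illustration,
# not the exact template used during fine-tuning.
excerpt = "ירושלים היא אחת הערים העתיקות בעולם."  # example Hebrew excerpt
prompt = f"קטע: {excerpt}\nצור שאלה ותשובה על סמך הקטע."  # "Passage: ... Create a question and answer based on the passage."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Sampling settings such as `max_new_tokens` and `top_p` are illustrative defaults and can be tuned to the application.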

## Training Details

### Training Data
The training data consists of synthetic question-answer pairs generated from the Hebrew Wikipedia. This data was then used to fine-tune the model, with the setup sketched below, to improve its performance at generating similar pairs.
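
Before the Trainer example below can run, the synthetic pairs need to be tokenized into `train_dataset` and `eval_dataset`. A minimal preprocessing sketch, assuming the pairs are stored as dictionaries with `context`, `question`, and `answer` fields (the field names, prompt format, and use of the `datasets` library are illustrative assumptions, not the exact pipeline used):

```python
# Hypothetical preprocessing sketch; field names and prompt format are assumptions.
from datasets import Dataset

raw_pairs = [
    {"context": "...", "question": "...", "answer": "..."},  # synthetic pairs from Hebrew Wikipedia excerpts
]

def to_text(example):
    # Join context, question, and answer into one training string.
    return {"text": f"קטע: {example['context']}\nשאלה: {example['question']}\nתשובה: {example['answer']}"}

def tokenize(example):
    tokens = tokenizer(example["text"], truncation=True, padding="max_length", max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()  # causal LM objective: predict the next token
    return tokens

dataset = Dataset.from_list(raw_pairs).map(to_text).map(tokenize)
train_dataset = eval_dataset = dataset  # placeholder; use a proper held-out split in practice
```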

```python
# Example of setting up training in PyTorch using the Transformers library
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    warmup_steps=500,                # number of warmup steps for the learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    fp16=True,                       # mixed precision training (see Training Hyperparameters)
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
)

trainer = Trainer(
    model=model,                  # the instantiated 🤗 Transformers model to be trained
    args=training_args,           # training arguments, defined above
    train_dataset=train_dataset,  # training dataset
    eval_dataset=eval_dataset,    # evaluation dataset
)

trainer.train()
```
### Training Procedure

#### Training Hyperparameters
- **Training regime:** Mixed precision training (fp16) to optimize GPU usage and speed up training while maintaining precision.

```python
# Configuration for mixed precision training
from torch.cuda.amp import GradScaler, autocast
from torch.optim import AdamW
from transformers import set_seed

set_seed(42)  # Set seed for reproducibility

# Optimizer and dataloader: assumed setup, not specified in the original card.
# `train_dataloader` is a PyTorch DataLoader over the tokenized training set.
optim = AdamW(model.parameters(), lr=5e-5)

# Adding mixed precision policy
scaler = GradScaler()

# Training loop
for epoch in range(int(training_args.num_train_epochs)):
    model.train()
    for batch in train_dataloader:
        batch = {k: v.to(model.device) for k, v in batch.items()}  # move tensors to the GPU
        optim.zero_grad()
        with autocast():  # applies mixed precision
            outputs = model(**batch)
            loss = outputs.loss
        scaler.scale(loss).backward()
        scaler.step(optim)
        scaler.update()
```

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data
The model was evaluated on a separate holdout set, generated synthetically in the same manner as the training set.

#### Factors
- **Domains:** The evaluation covered various domains within the Hebrew Wikipedia to ensure generalizability across different types of content.
- **Difficulty:** The questions varied in complexity to test the model's ability to handle both straightforward and more complex queries.

#### Metrics
The evaluation metrics are F1 score and exact match (EM), which measure the accuracy of the answers generated by the model.
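
For reference, here is a minimal sketch of how exact match and token-overlap F1 can be computed for a single prediction; the normalization is simplified, and the exact scoring script behind the reported numbers is not specified in this card:

```python
# Hedged sketch of EM and token-level F1; not the exact scoring script used.
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip() == reference.strip())

def f1_score(prediction: str, reference: str) -> float:
    pred_tokens, ref_tokens = prediction.split(), reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # per-token overlap counts
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("ירושלים", "ירושלים"))  # 1.0
print(f1_score("בירת ישראל היא ירושלים", "ירושלים היא בירת ישראל"))  # 1.0 (order-insensitive)
```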

### Results
The model achieved an F1 score of 88% and an exact match rate of 75%, indicating strong performance at generating accurate answers, particularly for the synthesized questions.

## Technical Specifications

### Model Architecture and Objective
The model follows a transformer-based architecture with modifications to optimize it for question generation and answering tasks.

### Compute Infrastructure
Training was performed on cloud GPUs, specifically NVIDIA Tesla V100s, which provided the compute needed for efficient training.

## Citation

**BibTeX:**

```bibtex
@misc{ronigold_dictalm2.0_instruct_finetuned_2024,
  author    = {Goldshmidt, Roni},
  title     = {Hebrew QA Fine-tuned Model},
  year      = {2024},
  publisher = {Hugging Face Model Hub}
}
```

## More Information
For more detailed usage, including advanced configurations and tips, refer to the repository or contact the model author. This model is part of a broader initiative to enhance NLP capabilities in Hebrew, aiming to support developers and researchers interested in applying advanced AI techniques to Hebrew texts.

## Model Card Authors
- **Roni Goldshmidt:** Main researcher and developer of the fine-tuned model.

## Model Card Contact
For any questions or feedback about the model, contact the author via the Hugging Face profile or directly at [email protected].