Update README.md
# llama3.2-1B-persianQAV2.0

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [azizmatin/question_answering](https://huggingface.co/datasets/azizmatin/question_answering) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4505
- Model Preparation Time: 0.003

## Model description

Persian-QA-LLaMA is a fine-tuned version of the 1B-parameter LLaMA 3.2 model, optimized for Persian-language question answering. It uses Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning: only a small set of adapter weights is trained on top of the frozen base model, so the adaptation stays lightweight, preserves the base model's general capabilities, and adds Persian question-answering ability at low computational cost.

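Since the release is a LoRA adapter over the base model, it can be loaded with PEFT roughly as in the sketch below. The adapter repo id shown is a placeholder, since the card does not state the full path.

```python
# Minimal sketch: attach the LoRA adapter to the frozen base model with PEFT.
# "username/llama3.2-1B-persianQAV2.0" is a hypothetical repo id, not the real path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "username/llama3.2-1B-persianQAV2.0"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# The adapter adds small low-rank weight updates on top of the base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```
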
## Intended uses & limitations
### Intended Uses

- Persian-language question answering
- Information extraction from Persian texts
- Educational support and tutoring systems
- Research and academic applications

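Continuing from the loading sketch above, a minimal Persian QA call might look like the following; the prompt template is an assumption, since the card does not document one.

```python
# Minimal QA sketch, reusing `model` and `tokenizer` from the loading example above.
# The "متن/پرسش" (context/question) prompt format is assumed, not documented by the card.
context = "تهران پایتخت ایران است."  # "Tehran is the capital of Iran."
question = "پایتخت ایران کجاست؟"      # "What is the capital of Iran?"

messages = [{"role": "user", "content": f"متن: {context}\nپرسش: {question}"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=False)
answer = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(answer)  # expected: something like "تهران" ("Tehran")
```
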
### Limitations

- Performance may vary on domain-specific questions
- Not optimized for NLP tasks other than question answering
- May struggle with highly colloquial or dialectal Persian
- Limited by the training dataset's coverage and diversity
- Should not be used to generate factual content without verification

## Training and evaluation data
The model was trained on a modified version of a Persian question-answering dataset structured similarly to SQuAD v2. The dataset includes:
- Question-answer pairs in Persian
- Contextual passages for answer extraction
- Various question types and difficulty levels
- Coverage across different topics and domains

The dataset was split into training, validation, and test sets to ensure robust evaluation of model performance.
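
The exact split procedure and ratios are not given in the card; assuming the Hugging Face `datasets` library, deriving the three splits could look like this sketch.

```python
# Sketch: load the QA dataset and carve out validation/test splits.
# The 90/5/5 ratios and the presence of a single "train" split are assumptions.
from datasets import load_dataset

dataset = load_dataset("azizmatin/question_answering")

# First split off 10% of the data, then halve it into validation and test.
split = dataset["train"].train_test_split(test_size=0.10, seed=42)
holdout = split["test"].train_test_split(test_size=0.5, seed=42)

train_ds, val_ds, test_ds = split["train"], holdout["train"], holdout["test"]
print(len(train_ds), len(val_ds), len(test_ds))
```
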
## Training procedure
### Training Details

- Base Model: [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) (1B parameters)
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Framework: Hugging Face Transformers with PEFT
- Optimization: AdamW optimizer with learning-rate scheduling
- Only the LoRA adapter weights are updated during training, keeping fine-tuning parameter-efficient (a sketch of this setup follows the list)
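
Putting the pieces together, a LoRA fine-tuning setup along the lines described above might look like the sketch below; the adapter rank, target modules, learning rate, and epoch count are illustrative assumptions, not values reported in the card.

```python
# Sketch of the described LoRA + Transformers/PEFT training setup.
# All numeric values and target modules are assumptions for illustration.
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # LoRA scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # freezes base weights, adds adapters
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="llama3.2-1B-persianQAV2.0",
    optim="adamw_torch",            # AdamW, as stated above
    lr_scheduler_type="linear",     # scheduler type assumed
    learning_rate=2e-4,             # assumed
    num_train_epochs=3,             # assumed
    per_device_train_batch_size=8,  # assumed
)

# train_ds is assumed to be the tokenized training split from the earlier sketch.
trainer = Trainer(model=model, args=training_args, train_dataset=train_ds)
trainer.train()
```
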
### Training hyperparameters
The following hyperparameters were used during training: