---
language:
  - en
license: apache-2.0
datasets:
  - rajpurkar/squad
  - ehovy/race
metrics:
  - accuracy
  - bleu
  - rouge
base_model: meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: question-answering
tags:
  - qa
  - llm
  - exam
  - multiple-choice
model-index:
  - name: Llama-3B-QA-Enhanced
    results:
      - task:
          type: question-answering
          name: Multiple Choice Question Generation
        dataset:
          name: RACE
          type: ehovy/race
        metrics:
          - name: accuracy
            type: accuracy
            value: 0.85  # Replace with actual value
          - name: bleu
            type: bleu
            value: 0.76  # Replace with actual value
          - name: rouge
            type: rouge
            value: 0.82  # Replace with actual value
---

# Model Card for Llama-3B-QA-Enhanced

This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct, optimized for generating high-quality multiple-choice questions (MCQs) from input text. It combines the language understanding capabilities of Llama with specialized training for educational content generation.

## Model Details

### Model Description

This model is designed to automatically generate multiple-choice questions from input text, making it particularly useful for educators, content creators, and educational technology platforms.

- **Developed by:** Ahmed Othman
- **Model type:** Fine-tuned Language Model
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** meta-llama/Llama-3.2-3B-Instruct

## Uses

### Direct Use

The model can be used directly for:
- Generating multiple-choice questions from educational texts
- Creating assessment materials
- Automated quiz generation
- Educational content development

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Llama is a decoder-only (causal) model, so it is loaded with AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained("AhmedOthman/Llama-3B-QA-Enhanced")
tokenizer = AutoTokenizer.from_pretrained("AhmedOthman/Llama-3B-QA-Enhanced")

text = "Your input text here"
inputs = tokenizer(text, return_tensors="pt", max_length=512, truncation=True)
outputs = model.generate(inputs.input_ids, max_new_tokens=256)
mcq = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(mcq)
```
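Because the base checkpoint is an instruct-tuned Llama model, prompting through the tokenizer's chat template may work better than raw text. The sketch below assumes a plain "generate an MCQ from this passage" instruction; the exact wording used during fine-tuning is not documented here and should be adapted accordingly.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AhmedOthman/Llama-3B-QA-Enhanced"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical instruction; adjust to match the prompt format used in fine-tuning.
messages = [
    {"role": "user",
     "content": "Generate a multiple-choice question with four options from this passage:\n\n<your passage here>"}
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```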

### Out-of-Scope Use

This model should not be used for:
- Generating factually incorrect or misleading questions
- Creating questions about sensitive or controversial topics
- Replacing human expertise in high-stakes assessment development

## Training Details

### Training Data

The model was trained on a combination of:
- SQuAD (Stanford Question Answering Dataset)
- RACE (ReAding Comprehension from Examinations)

### Training Procedure

#### Training Hyperparameters

- **Training regime:** fp16 mixed precision
- **Maximum sequence length:** 512 tokens
- **Learning rate:** 2e-5
- **Batch size:** 16
- **Number of epochs:** 3
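
For illustration only, the hyperparameters listed above correspond roughly to a Hugging Face `TrainingArguments` configuration like the following; the output directory and logging/saving settings are assumptions, not part of the original training script.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above; paths and logging settings are placeholders.
training_args = TrainingArguments(
    output_dir="llama-3b-qa-enhanced",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    fp16=True,                 # fp16 mixed precision
    logging_steps=50,
    save_strategy="epoch",
)
```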

## Evaluation

### Metrics

The model was evaluated using:
- BLEU score for question generation quality
- ROUGE score for answer relevance
- Accuracy of generated distractors
- Human evaluation for question quality
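
As a reference point, BLEU and ROUGE can be computed with the `evaluate` library as sketched below; the predictions and references shown are placeholders, not the actual evaluation data.

```python
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

# Placeholder examples; substitute generated questions and gold questions.
predictions = ["What is the capital of France? A) Paris B) Lyon C) Nice D) Lille"]
references = [["What is the capital of France? A) Paris B) Marseille C) Lyon D) Nice"]]

print(bleu.compute(predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=[r[0] for r in references]))
```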

## Limitations and Bias

- Limited to English language content
- May generate simpler questions for complex topics
- Performance varies with input text quality
- May reflect biases present in training data

## Environmental Impact

- **Base Model:** Llama 3B
- **Fine-tuning Hardware:** Single A100 GPU
- **Training Time:** Approximately 8 hours

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{othman2024llama3bqa,
  author = {Othman, Ahmed},
  title = {Llama-3B-QA-Enhanced},
  year = {2024},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/AhmedOthman/Llama-3B-QA-Enhanced}}
}
```

## Model Card Contact

For questions or issues, please contact Ahmed Othman through the HuggingFace model repository.