# Qwen2.5-3B Fine-Tuned on BBH Dataset - Model Card

## Model Overview

- **Model Name:** Qwen2.5-3B Fine-Tuned on BBH
- **Base Model:** Qwen2.5-3B-Instruct
- **Fine-Tuned Dataset:** BBH (BigBench Hard)
- **Task:** Causal Language Modeling (CLM)
- **Fine-Tuning Objective:** Improve performance on reasoning and knowledge-based multiple-choice tasks
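As a quick usage sketch, the model can be loaded and prompted like any other Qwen2.5 causal LM with `transformers`. The repository ID below is a placeholder assumption; substitute the actual fine-tuned checkpoint name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo ID (assumption) -- replace with the real fine-tuned checkpoint.
model_id = "your-username/qwen2.5-3b-bbh"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A BBH-style multiple-choice prompt.
prompt = (
    'Q: Is the following sentence plausible? "The ball moved because it was kicked."\n'
    "Options: (A) Yes (B) No\nA:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)

# Print only the newly generated answer tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```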
## Dataset Information

The model was fine-tuned on BigBench Hard (BBH), a dataset designed to evaluate complex reasoning tasks.

Key subsets used for training (a loading sketch follows the list):

- **Causal Judgement**: evaluating causality understanding
- **Date Understanding**: temporal reasoning and date manipulation
- **Boolean Expressions**: logical reasoning
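A minimal sketch of pulling these subsets with the Hugging Face `datasets` library; the `lukaemon/bbh` repository ID, config names, and column names are assumptions, so adjust them to whichever BBH mirror was actually used for training.

```python
from datasets import load_dataset, concatenate_datasets

# Assumed BBH mirror and config names (swap in the source actually used).
subsets = ["causal_judgement", "date_understanding", "boolean_expressions"]
splits = [load_dataset("lukaemon/bbh", name, split="test") for name in subsets]

# Each example is assumed to carry an "input" question and a "target" answer string.
bbh = concatenate_datasets(splits)
print(bbh[0]["input"], "->", bbh[0]["target"])
```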
Dataset characteristics:

- **Format**: Multiple-choice questions
- **Domains**: Logic, Mathematics, Commonsense Reasoning
- **Label Mapping**: Answers converted into numerical classes (e.g., A → 0, B → 1, etc.), as sketched below
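A minimal sketch of the letter-to-class conversion described above; the exact preprocessing used for training is not shown here, so treat this as illustrative only.

```python
# Map a multiple-choice letter answer to a numerical class label,
# e.g. "(A)" -> 0, "B" -> 1, "(C)" -> 2.
def answer_to_class(answer: str) -> int:
    letter = answer.strip().strip("()").upper()
    return ord(letter) - ord("A")

assert answer_to_class("(A)") == 0
assert answer_to_class("B") == 1
```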