Update README.md
README.md
CHANGED
@@ -70,9 +70,9 @@ AstroSage-Llama-3.1-8B shows remarkable performance improvements:
 
 | Model | Score (%) |
 |-------|-----------|
-
+| **<span style="color:green">AstroSage-Llama-3.1-8B</span>** | **80.9** |
 | GPT-4o | 80.4 |
-| LLaMA-3-8B |
+| LLaMA-3.1-8B | 73.7 |
 | Gemma-2-9B | 71.5 |
 | Qwen-2.5-7B | 70.4 |
 | Yi-1.5-9B | 68.4 |
@@ -84,7 +84,7 @@ The model demonstrates:
 - Outperformance of all 8B parameter models
 - Comparable performance to GPT-4o (80.4%)
 - ~1000x more cost-effective than proprietary models
-
+- 7 percentage-point improvement over base Llama-3.1-8b model
 
 
 ## Training Data
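The new bullets are simple deltas over the benchmark table above. A minimal sketch (not part of the repository; the `scores` dict is just the table re-typed) that re-derives them:

```python
# Sanity-check the score deltas claimed in the README bullets,
# using the benchmark numbers from the table in this diff.
scores = {
    "AstroSage-Llama-3.1-8B": 80.9,
    "GPT-4o": 80.4,
    "LLaMA-3.1-8B": 73.7,
    "Gemma-2-9B": 71.5,
    "Qwen-2.5-7B": 70.4,
    "Yi-1.5-9B": 68.4,
}

# Improvement over the base Llama-3.1-8B model
# (the "7 percentage-point improvement" bullet).
base_gain = round(scores["AstroSage-Llama-3.1-8B"] - scores["LLaMA-3.1-8B"], 1)
print(base_gain)  # 7.2, i.e. ~7 percentage points

# Gap to GPT-4o (the "comparable performance" bullet).
gpt4o_gap = round(scores["AstroSage-Llama-3.1-8B"] - scores["GPT-4o"], 1)
print(gpt4o_gap)  # 0.5
```

The rounded difference of 7.2 points is what the bullet reports as "7 percentage-point improvement", and the 0.5-point gap to GPT-4o backs the "comparable performance" claim.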