Crystalcareai committed
Update README.md
README.md CHANGED
@@ -44,11 +44,11 @@ To ensure the robustness and effectiveness of Llama-3-SEC, the model has undergone
 - Domain-specific perplexity, measuring the model's performance on SEC-related data

-<img src="https://i.ibb.co/xGHRfLf/Screenshot-2024-06-11-at-10-23-59-PM.png" width="
+<img src="https://i.ibb.co/xGHRfLf/Screenshot-2024-06-11-at-10-23-59-PM.png" width="600">

 - Extractive numerical reasoning tasks, using subsets of TAT-QA and ConvFinQA datasets

-<img src="https://i.ibb.co/2v6PdDx/Screenshot-2024-06-11-at-10-25-03-PM.png" width="
+<img src="https://i.ibb.co/2v6PdDx/Screenshot-2024-06-11-at-10-25-03-PM.png" width="600">

 - General evaluation metrics, such as BIG-bench, AGIEval, GPT4all, and TruthfulQA, to assess the model's performance on a wide range of tasks
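For context on the first bullet in the hunk above: domain-specific perplexity is the standard exponentiated mean negative log-likelihood of held-out SEC text under the model. A minimal sketch using the Hugging Face transformers API is shown below; the checkpoint id is a placeholder and is not taken from this commit.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model id; substitute the actual Llama-3-SEC checkpoint from the model card.
MODEL_ID = "arcee-ai/Llama-3-SEC"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated mean negative log-likelihood of `text` under the model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # When labels are supplied, the model returns the mean cross-entropy
        # loss over all predicted tokens (inputs are shifted internally).
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Illustrative SEC-style snippet, not an evaluation sample from the README.
sec_snippet = "Item 1A. Risk Factors. The Company's operating results may fluctuate..."
print(f"Perplexity on SEC snippet: {perplexity(sec_snippet):.2f}")
```

Lower perplexity on SEC filings relative to the base Llama-3 model would indicate successful domain adaptation, which is presumably what the README's evaluation figures report.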