ShengnanAn committed: Update README.md

README.md CHANGED
@@ -14,7 +14,7 @@ language:
 
 **FILM-7B** is a 32K-context LLM that overcomes the lost-in-the-middle problem.
 It is trained from Mistral-7B-Instruct-v0.2 by applying Information-Intensive (In2) Training.
-FILM-7B achieves SOTA-level performance on real-world long-context tasks among ~7B-size LLMs and does not compromise short-context performance.
+FILM-7B achieves near-perfect performance on probing tasks and SOTA-level performance on real-world long-context tasks among ~7B-size LLMs, without compromising short-context performance.
 
 ## Model Usage
 
@@ -31,6 +31,8 @@ The system template for FILM-7B:
 
 ## Probing Results
 
+To reproduce the results on our VaL Probing, see the guidance in [https://github.com/microsoft/FILM/tree/main/VaLProbing](https://github.com/microsoft/FILM/tree/main/VaLProbing).
+
 <p align="center">
 <img src="./figures/probing_results.png" width="800">
 <br>