Sao10K committed on
Commit
1aad7aa
1 Parent(s): 0cf0388

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -3,13 +3,15 @@ Training Details:
 Trained at 8K Context -> Expanded to 32K Context due to context extension with PoSE training.
 
 Dataset Modifications:
-- Further Cleaned up Roleplaying Samples
+- Further Cleaned up Roleplaying Samples -> Quality Check
 - Removed Low Quality Samples from Manual Check
-- More Creative Writing Samples
+- More Creative Writing Samples -> 2x
 - Remade and Refined Detailed Instruct Data
 
 Needle in a Haystack Results:
+![Results](Linkhere)
 
+Coherent at 32K Context. Not as good as a natively trained 32K model, but much better than regular rope scaling.
 
 ```
 sequence_len: 8192
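
Note on the PoSE mention above: PoSE (Positional Skip-wisE training) fine-tunes on short chunks (here 8K tokens) while sampling position IDs that span the longer target window (here 32K), so the model sees large relative positions without ever attending over 32K tokens. The following is a minimal illustrative sketch of that position-ID sampling only; the function name and segmenting scheme are assumptions, not part of this commit or its training config.

```python
import random

def pose_position_ids(chunk_len: int = 8192, target_len: int = 32768) -> list[int]:
    """Toy PoSE-style position IDs (illustrative assumption, not the repo's code):
    split a short training chunk into two segments and insert a random skip
    between them, so the IDs span the longer target window without ever
    feeding target_len tokens to the model."""
    split = random.randint(1, chunk_len - 1)          # where the chunk is cut in two
    skip = random.randint(0, target_len - chunk_len)  # how far the second segment jumps

    first = list(range(0, split))                         # positions 0 .. split-1
    second = list(range(split + skip, chunk_len + skip))  # same tokens, shifted IDs
    return first + second

ids = pose_position_ids()
print(len(ids), max(ids))  # always 8192 IDs; the largest ID can reach up to 32767
```

Because each step still attends over only 8K tokens, training cost stays at the 8K budget while the positional embeddings are exposed to distances up to 32K, which is consistent with the README's claim that this holds up better than regular rope scaling while still trailing a natively trained 32K model.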