mpasila committed
Commit 4a1dcae
1 parent: bbcc46a

Update README.md

Files changed (1):
  README.md (+9 −4)

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 language:
-- en
+- fi
 license: apache-2.0
 tags:
 - text-generation-inference
@@ -9,15 +9,20 @@ tags:
 - llama
 - trl
 - sft
-base_model: mpasila/Viking-33B-1000B
+base_model: LumiOpen/Viking-33B
+datasets:
+- mpasila/Finnish-ShareGPT-Tiny-V1-1
 ---
+Uses my [tiny dataset](https://huggingface.co/datasets/mpasila/Finnish-ShareGPT-Tiny-V1-1) to train this bigger variant of the Viking model family.
+
+This LoRA uses the 1000B checkpoint.
 
 # Uploaded model
 
 - **Developed by:** mpasila
 - **License:** apache-2.0
-- **Finetuned from model :** mpasila/Viking-33B-1000B
+- **Finetuned from model :** LumiOpen/Viking-33B
 
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
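
The updated card describes a LoRA adapter trained on top of `LumiOpen/Viking-33B`. A minimal sketch of how such an adapter is typically loaded with Hugging Face's PEFT library is below; note that `ADAPTER_ID` is a placeholder assumption (the card does not state this repo's id), so substitute the actual adapter repository name.

```python
# Hypothetical usage sketch (not from the model card): apply a LoRA adapter
# to its base model using transformers + peft.

BASE_MODEL_ID = "LumiOpen/Viking-33B"       # base model named in the card
ADAPTER_ID = "mpasila/<adapter-repo>"       # placeholder -- replace with the real repo id


def load_model():
    """Load the base model and attach the LoRA adapter.

    Downloads ~33B parameters of weights, so this is only feasible
    on a machine with substantial GPU memory (or with quantization).
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attaches the LoRA weights
    return tokenizer, model
```

`PeftModel.from_pretrained` keeps the adapter separate from the base weights; calling `model.merge_and_unload()` afterwards would fold the LoRA deltas into the base model for faster inference.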