Seongyun committed
Commit 560c4b5 · verified · 1 Parent(s): f2d56e7

Update README.md

Files changed (1): README.md +13 -5
README.md CHANGED
@@ -6,7 +6,19 @@ tags:
model-index:
- name: janus-7b
  results: []
+ license: apache-2.0
+ language:
+ - en
---
+ ## Links for Reference
+
+ - **Homepage: In Progress**
+ - **Repository: https://github.com/kaistAI/Janus**
+ - **Paper:**
+ - **Point of Contact: [email protected]**
+
+ # TL;DR
+ Janus 7B is a model trained using [Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) as its base model. Janus 7B was trained on [Multifaceted Collection](), a preference dataset spanning 65k user instructions and 192k combinations of values that go beyond generic helpfulness and harmlessness. Janus 7B not only excels at generating personalized responses on [Multifaceted Bench]() that cater to various human preferences, but is also adept at producing responses that are generally preferred for being helpful and harmless.

### Training hyperparameters

@@ -25,13 +37,9 @@ The following hyperparameters were used during training:
- lr_scheduler_warmup_steps: 10
- num_epochs: 4

- ### Training results
- |Model|mf-AlpacaEval|mf-Flask|mf-Koala|mf-MT-Bench|mf-Self-Instruct|Average|
- |-----|-------------|--------|--------|-----------|----------------|-------|
-
### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.2.2
- Datasets 2.18.0
- - Tokenizers 0.15.0
+ - Tokenizers 0.15.0
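
Since the updated card highlights generating personalized responses, a minimal usage sketch follows. It is not part of the commit: the repository id `kaistAI/janus-7b` and the availability of a chat template on the tokenizer are assumptions, and the preference wording is illustrative. Because chat templates differ in how they treat system turns, the preference description is prepended to the user turn here to stay template-agnostic.

```python
# Minimal usage sketch (assumptions: the model is published as "kaistAI/janus-7b"
# and its tokenizer ships a chat template; neither is stated in this commit).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaistAI/janus-7b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative preference description, in the spirit of the multifaceted
# "values" the card describes.
preference = (
    "I prefer concise, step-by-step explanations aimed at a beginner, "
    "with a short example where it helps."
)
question = "Explain what learning-rate warmup does during fine-tuning."
messages = [{"role": "user", "content": f"{preference}\n\n{question}"}]

# Tokenize with the model's chat template and generate a response.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```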
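The visible hunk lists only two training hyperparameters (a 10-step warmup and 4 epochs); the rest of the list sits outside the diff context. As a rough, hedged illustration of how such a recipe maps onto `transformers`, here is a `TrainingArguments` sketch; every value not shown in the hunk is a placeholder, not the authors' configuration.

```python
# Hedged sketch: only warmup_steps=10 and num_train_epochs=4 come from the card;
# all other values are placeholders, not the authors' actual recipe.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="janus-7b-sft",       # placeholder
    num_train_epochs=4,              # from the card (num_epochs)
    warmup_steps=10,                 # from the card (lr_scheduler_warmup_steps)
    learning_rate=2e-5,              # placeholder; not in the visible hunk
    per_device_train_batch_size=4,   # placeholder; not in the visible hunk
    lr_scheduler_type="cosine",      # placeholder; not in the visible hunk
    bf16=True,                       # placeholder; common for Mistral-family training
    logging_steps=10,                # placeholder
)
```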