hermeschen1116 committed
Commit f86dbf9
1 Parent(s): 2068ebb

Update README.md

Files changed (1):
  1. README.md +11 -2
README.md CHANGED
@@ -7,7 +7,6 @@ tags:
 - trl
 - sft
 - unsloth
-- generated_from_trainer
 model-index:
 - name: response_generator_for_emotion_chat_bot
   results: []
@@ -19,7 +18,7 @@ pipeline_tag: text-generation
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# response_generator_for_emotion_chat_bot
+# Response Generator for [Emotion Chat Bot](https://github.com/hermeschen1116/chat-bot)
 
 This model is a fine-tuned version of [unsloth/llama-2-7b-bnb-4bit](https://huggingface.co/unsloth/llama-2-7b-bnb-4bit) on [hermeschen1116/daily_dialog_for_RG](https://huggingface.co/datasets/hermeschen1116/daily_dialog_for_RG), self modified version of [daily_dialog](li2017dailydialog/daily_dialog).
 
@@ -40,7 +39,12 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
+- system_prompt: ""
 - learning_rate: 0.0002
+- weight_decay: 0.001
+- max_grad_norm: 0.3
+- warmup_ratio: 0.03
+- max_steps: -1
 - train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
@@ -48,6 +52,11 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: constant
 - lr_scheduler_warmup_ratio: 0.03
 - num_epochs: 1
+- init_lora_weights: true
+- lora_rank: 16
+- lora_alpha: 16
+- lora_dropout: 0.1
+- use_rslora: true
 
 ### Framework versions
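For readers reproducing this setup, the hyperparameters this commit adds to the card correspond roughly to a PEFT LoRA configuration plus TRL trainer arguments. The sketch below is an assumed mapping of the card's names onto `peft.LoraConfig` and `trl.SFTConfig` (e.g. `lora_rank` → `r`, `train_batch_size` → `per_device_train_batch_size`); it is not taken from the repository's training script, and anything not listed on the card (optimizer, precision, `output_dir`) is an assumption or omitted.

```python
# Hypothetical mapping of the card's hyperparameters onto peft/trl config objects.
# The commit itself only lists the values; this code and its names are assumptions.
from peft import LoraConfig
from trl import SFTConfig

# LoRA settings from the card: rank/alpha 16, dropout 0.1, rsLoRA enabled
peft_config = LoraConfig(
    r=16,                    # lora_rank
    lora_alpha=16,           # lora_alpha
    lora_dropout=0.1,        # lora_dropout
    use_rslora=True,         # use_rslora
    init_lora_weights=True,  # init_lora_weights
    task_type="CAUSAL_LM",
)

# Trainer settings from the card; max_steps=-1 defers to num_train_epochs
training_args = SFTConfig(
    output_dir="response_generator_for_emotion_chat_bot",  # assumed name
    learning_rate=2e-4,
    weight_decay=0.001,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    max_steps=-1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=1,
)
```

Both objects would then typically be passed to `trl.SFTTrainer` (as `args=training_args` and `peft_config=peft_config`) along with the base model and the dataset named above.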