krum-utsav committed commit e4c9dfb (parent: eb4840a)

Update README.md
Files changed (1):
  1. README.md (+4 −3)

README.md CHANGED
@@ -8,7 +8,9 @@ pipeline_tag: text-generation
 
 ## Model description
 
-togethercomputer/RedPajama-INCITE-Base-3B-v1 finetuned for paraphrasing and changing the tone of the input sentence(to casual/professional/witty).
+The togethercomputer/RedPajama-INCITE-Base-3B-v1 model finetuned for `Paraphrasing` and `Changing the Tone` of the input sentence(to `casual`/`professional`/`witty`).
+
+Look at the repo [llm-toys](https://github.com/kuutsav/llm-toys) for usage and other details.
 
 Sample training data:
 ```json
@@ -49,7 +51,6 @@ Sample training data:
 
 ## Training procedure
 
-
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
 - load_in_4bit: True
@@ -60,7 +61,7 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_quant_type: nf4
 - bnb_4bit_use_double_quant: True
 - bnb_4bit_compute_dtype: bfloat16
-### Framework versions
 
+### Framework versions
 
 - PEFT 0.4.0.dev0
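For reference, the flags listed in this README map onto a `transformers` `BitsAndBytesConfig` roughly as below. This is a sketch reconstructed from the listed values, not code taken from the repo's actual training script:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes flags in the README verbatim; the surrounding
# training code (base-model loading, PEFT/LoRA setup) is not part of this commit.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

Such a config is typically passed as `quantization_config=bnb_config` when calling `AutoModelForCausalLM.from_pretrained` for QLoRA-style finetuning.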
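A quick back-of-the-envelope look at why `bnb_4bit_use_double_quant: True` is set. The block sizes below are the QLoRA paper's defaults, assumed here rather than read from this config:

```python
# Approximate storage cost per weight for the NF4 settings above.
NF4_BITS = 4      # bnb_4bit_quant_type: nf4 stores 4 bits per weight
BLOCK = 64        # assumed first-level quantization block size (QLoRA default)
ABSMAX_BITS = 32  # one fp32 scaling constant per block

# Without double quantization: 4 bits plus 32/64 bits of per-block overhead.
plain = NF4_BITS + ABSMAX_BITS / BLOCK

# Double quantization re-quantizes the scaling constants to 8 bits, in
# second-level blocks of 256, keeping one fp32 constant per second-level block.
DQ_BITS, DQ_BLOCK = 8, 256
double = NF4_BITS + DQ_BITS / BLOCK + ABSMAX_BITS / (BLOCK * DQ_BLOCK)

print(f"{plain:.3f} vs {double:.3f} bits/param")  # → 4.500 vs 4.127 bits/param
```

Under these assumptions double quantization saves roughly 0.37 bits per parameter, i.e. about 140 MB on a 3B-parameter model, for free at training time.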