zzzlift committed on
Commit 67aebae
1 Parent(s): a582dc5
Files changed (1)
  1. README.md +10 -8
README.md CHANGED
@@ -1,6 +1,6 @@
- ## bloom-7b-rl
+ ## chatbloom-7b

- This is a reinforcement learning enhanced bloom model (bloom-rl), fine-tuned based on bloom-7b (Muennighoff et al.).
+ This is an RLHF-enhanced bloom model (chatbloom), fine-tuned from bloom-7b (Muennighoff et al.). It is trained with RLHF on English QA datasets only, which improves its English understanding and generation.

  ### Usage

@@ -10,16 +10,17 @@ If you don't have a good GPU (mem > 20G) then use the code below:
  # pip install -q transformers accelerate
  from transformers import AutoModelForCausalLM, AutoTokenizer

- checkpoint = "hongyin/bloom-7b-rl"
+ checkpoint = "hongyin/chatbloom-7b"

  tokenizer = AutoTokenizer.from_pretrained(checkpoint)
  model = AutoModelForCausalLM.from_pretrained(checkpoint)

- inputs = tokenizer.encode("Translate to Chinese: I love you.", return_tensors="pt")
+ inputs = tokenizer.encode("Paraphrasing the text: I love you.", return_tensors="pt")
  outputs = model.generate(inputs)
  print(tokenizer.decode(outputs[0]))

- Translate to Chinese: I love you. 翻译:我爱你
+ Original output: Paraphrasing the text: I love you. I love you. I love you. I love
+ ChatBloom output: Paraphrasing the text: I love you. I am a good person.
  ```

  If you have a good GPU (mem > 20G) then use the code below:
@@ -28,16 +29,17 @@ If you have a good GPU (mem > 20G) then use the code below:
  # pip install -q transformers accelerate
  from transformers import AutoModelForCausalLM, AutoTokenizer

- checkpoint = "hongyin/bloom-7b-rl"
+ checkpoint = "hongyin/chatbloom-7b"

  tokenizer = AutoTokenizer.from_pretrained(checkpoint)
  model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")

- inputs = tokenizer.encode("Translate to Chinese: I love you.", return_tensors="pt").to("cuda")
+ inputs = tokenizer.encode("Paraphrasing the text: I love you.", return_tensors="pt").to("cuda")
  outputs = model.generate(inputs)
  print(tokenizer.decode(outputs[0]))

- Translate to Chinese: I love you. 翻译:我爱你
+ Original output: Paraphrasing the text: I love you. I love you. I love you. I love
+ ChatBloom output: Paraphrasing the text: I love you. I am a good person.
  ```

  ## Bibtex entry and citation info
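
A note on the examples above: the "Original output" degenerates into repetition ("I love you. I love you. …") because `generate()` is called with default greedy decoding and simply runs to the length limit. Below is a minimal sketch of the same call with an explicit token budget and sampling; the parameter values are illustrative assumptions, not settings from this commit.

```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "hongyin/chatbloom-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer.encode("Paraphrasing the text: I love you.", return_tensors="pt")
outputs = model.generate(
    inputs,
    max_new_tokens=50,       # explicit generation budget (illustrative value)
    do_sample=True,          # sample instead of greedy argmax decoding
    top_p=0.9,               # nucleus sampling (illustrative value)
    repetition_penalty=1.2,  # discourage the verbatim looping seen above
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Sampling and `repetition_penalty` address the looping itself; `max_new_tokens` merely bounds how long it can run.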
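On the "mem > 20G" cutoff: a 7B-parameter model in 16-bit weights is roughly 14 GB before activations, hence the ~20 GB guidance. For smaller GPUs, a commonly used alternative is 8-bit loading via bitsandbytes. This is a hedged sketch of that standard transformers pattern, not part of the committed README; newer transformers versions express the same thing through `BitsAndBytesConfig`.

```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "hongyin/chatbloom-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",   # let accelerate place weights automatically
    load_in_8bit=True,   # ~1 byte per parameter instead of 2 in fp16
)

inputs = tokenizer.encode("Paraphrasing the text: I love you.", return_tensors="pt").to(model.device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```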