---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
---

# Starling-LM-7B-beta-laser-dpo

This model follows the implementation of laserRMT
(https://github.com/cognitivecomputations/laserRMT) together with a novel
training technique: we partially freeze the model according to a laser-like
analysis (official paper forthcoming), which effectively prevents the
significant problem of language models forgetting previously acquired
knowledge. This is particularly crucial when teaching the model specific
skills, such as function calling.
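
For illustration only (the official analysis code is not released here), a minimal sketch of what such a partial freeze could look like in `transformers`, assuming the laser-like analysis has already produced the set of layer indices to keep trainable:

```python
# Minimal sketch (not the official laserRMT code): freeze every transformer
# layer except those a laser-style analysis marked as trainable.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("openchat/openchat-3.5-0106")

# Hypothetical output of the analysis: indices of layers left trainable.
trainable_layers = {28, 29, 30, 31}

for idx, layer in enumerate(model.model.layers):
    # requires_grad_(False) excludes the layer's parameters from training.
    layer.requires_grad_(idx in trainable_layers)
```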
# Starling-LM-7B-beta

<!-- Provide a quick summary of what the model is/does. -->

- **Developed by:** The Nexusflow Team (Banghua Zhu\*, Evan Frick\*, Tianhao Wu\*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao)
- **Model type:** Language model fine-tuned with RLHF / RLAIF
- **License:** Apache-2.0 license, under the condition that the model is not used to compete with OpenAI
- **Finetuned from model:** [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))


We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with our new reward model [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) and the policy optimization method of [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593).
Harnessing the power of the ranking dataset [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the upgraded reward model [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B), and the new reward-training and policy-tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 on MT-Bench with GPT-4 as judge.


## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

**Important: Please use the exact chat template provided below for the model; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to make this happen less often.**

Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
In addition, our model is hosted on the LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.

The conversation template is the same as Openchat-3.5-0106:
```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")

# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]

# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]

# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
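
Equivalently, since the Openchat-3.5-0106 tokenizer ships with a built-in chat template, `apply_chat_template` should build the same single-turn prompt; this is a sketch, so verify its output against the token IDs asserted above:

```python
# Sketch: build the single-turn prompt via the tokenizer's chat template
# instead of hand-formatting the string. Templates can change between
# releases, so check the result against the asserted IDs above.
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
messages = [{"role": "user", "content": "Hello"}]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
print(tokens)  # expect the same IDs as the single-turn example above
```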

## Code Examples

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")
model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta")

def generate_response(prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids,
        max_length=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    response_ids = outputs[0]
    response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
    return response_text

# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)

# Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)

# Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
```
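
As a follow-up to the verbosity note in Uses: in the `transformers` generate API, greedy decoding (`do_sample=False`) plays the role of temperature = 0. A sketch, reusing `model`, `tokenizer`, and `single_turn_prompt` from the block above:

```python
# Sketch: greedy decoding is the generate() analogue of temperature = 0,
# which the Uses section suggests for curbing occasionally verbose outputs.
# max_new_tokens bounds only the generated tokens, not prompt + output.
input_ids = tokenizer(single_turn_prompt, return_tensors="pt").input_ids
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=False,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```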

## License
The dataset, model, and online demo are subject to the [Terms of Use](https://openai.com/policies/terms-of-use) for data generated by OpenAI and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violations.


## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback on and evaluation of this beta release. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation, and online demo. We would like to thank the open-source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan, and ShareGPT.

## Citation
```
@misc{starling2023,
    title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
    url = {},
    author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao},
    month = {November},
    year = {2023}
}
```