MexIvanov committed 046d180 (parent 99dd58f): Update README.md
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
license: mit
language:
- ru
- en
tags:
- python
- code
pipeline_tag: conversational
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** C.B. Pronin, A.V. Volosova, A.V. Ostroukh, Yu.N. Strogov, V.V. Kurbatov, A.S. Umarova.
- **Model type:** The base model HuggingFaceH4/zephyr-7b-beta merged with the LoRA (PEFT) adapter MexIvanov/zephyr-python-ru, which was trained on a mix of publicly available data and machine-translated synthetic Python coding datasets.
- **Language(s) (NLP):** Russian, English, Python
- **License:** MIT
- **Finetuned from model:** HuggingFaceH4/zephyr-7b-beta

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** Coming soon...
- **Paper:** Coming soon...

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
An experimental finetune of Zephyr-7b-beta, aimed at improving coding performance and support for coding-related instructions written in Russian.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

Instruction-based coding in Python, driven by instructions written in natural language (English or Russian).

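A minimal loading sketch for this setup, assuming `transformers`, `peft`, and `torch` are installed and the Hub is reachable; the two model IDs are the ones named in the description above, everything else here is illustrative:

```python
# Sketch: load the base model, then attach the LoRA adapter described in this card.
# Requires transformers, peft, torch, and network access to the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "MexIvanov/zephyr-python-ru"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)
```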
Prompt template - Zephyr:
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```
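The template above can be filled in with a small helper; `build_zephyr_prompt` is a hypothetical name for illustration, not part of the released code:

```python
def build_zephyr_prompt(prompt: str, system: str = "") -> str:
    """Format a user instruction with the Zephyr chat template shown above."""
    return f"<|system|>\n{system}</s>\n<|user|>\n{prompt}</s>\n<|assistant|>\n"

# Example: a Russian coding instruction, as this card targets.
text = build_zephyr_prompt("Напишите функцию на Python, которая складывает два числа.")
```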

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This adapter model is intended for (but not limited to) research use only. It was trained on a code-based instruction set and has no moderation mechanisms. Use it at your own risk; we are not responsible for any usage or output of this model.

Quote from the Zephyr (base model) repository: "Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (mistralai/Mistral-7B-v0.1), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this."

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

75
+ ## Training procedure
76
+
77
+
78
+ The following `bitsandbytes` quantization config was used during training:
79
+ - quant_method: QuantizationMethod.BITS_AND_BYTES
80
+ - load_in_8bit: False
81
+ - load_in_4bit: True
82
+ - llm_int8_threshold: 6.0
83
+ - llm_int8_skip_modules: None
84
+ - llm_int8_enable_fp32_cpu_offload: False
85
+ - llm_int8_has_fp16_weight: False
86
+ - bnb_4bit_quant_type: nf4
87
+ - bnb_4bit_use_double_quant: False
88
+ - bnb_4bit_compute_dtype: float16
89
+
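As a sketch, the listed values correspond to a config like the following plain-dict reconstruction (not taken from the repository's training code); with `transformers` installed, keys like these are what `BitsAndBytesConfig` accepts:

```python
# Hypothetical reconstruction of the 4-bit quantization settings listed above.
# NF4 4-bit storage with float16 compute: weights are kept in 4-bit NF4 format,
# while matrix multiplications run in float16.
bnb_cfg = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float16",
}
```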
### Framework versions

- PEFT 0.6.2