lily-knab committed on
Commit
a235477
·
1 Parent(s): fdf60ed

upload sarashina2

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitattributes +3 -0
  2. README.md +136 -0
  3. model.embed_tokens.weight +3 -0
  4. model.layers.0.input_layernorm.weight +0 -0
  5. model.layers.0.post_attention_layernorm.weight +0 -0
  6. model.layers.1.input_layernorm.weight +0 -0
  7. model.layers.1.post_attention_layernorm.weight +0 -0
  8. model.layers.10.input_layernorm.weight +0 -0
  9. model.layers.10.post_attention_layernorm.weight +0 -0
  10. model.layers.11.input_layernorm.weight +0 -0
  11. model.layers.11.post_attention_layernorm.weight +0 -0
  12. model.layers.12.input_layernorm.weight +0 -0
  13. model.layers.12.post_attention_layernorm.weight +0 -0
  14. model.layers.13.input_layernorm.weight +0 -0
  15. model.layers.13.post_attention_layernorm.weight +0 -0
  16. model.layers.14.input_layernorm.weight +0 -0
  17. model.layers.14.post_attention_layernorm.weight +0 -0
  18. model.layers.15.input_layernorm.weight +0 -0
  19. model.layers.15.post_attention_layernorm.weight +0 -0
  20. model.layers.16.input_layernorm.weight +0 -0
  21. model.layers.16.post_attention_layernorm.weight +0 -0
  22. model.layers.17.input_layernorm.weight +0 -0
  23. model.layers.17.post_attention_layernorm.weight +0 -0
  24. model.layers.18.input_layernorm.weight +0 -0
  25. model.layers.18.post_attention_layernorm.weight +0 -0
  26. model.layers.19.input_layernorm.weight +0 -0
  27. model.layers.19.post_attention_layernorm.weight +0 -0
  28. model.layers.2.input_layernorm.weight +0 -0
  29. model.layers.2.post_attention_layernorm.weight +0 -0
  30. model.layers.20.input_layernorm.weight +0 -0
  31. model.layers.20.post_attention_layernorm.weight +0 -0
  32. model.layers.21.input_layernorm.weight +0 -0
  33. model.layers.21.post_attention_layernorm.weight +0 -0
  34. model.layers.22.input_layernorm.weight +0 -0
  35. model.layers.22.post_attention_layernorm.weight +0 -0
  36. model.layers.23.input_layernorm.weight +0 -0
  37. model.layers.23.post_attention_layernorm.weight +0 -0
  38. model.layers.24.input_layernorm.weight +0 -0
  39. model.layers.24.post_attention_layernorm.weight +0 -0
  40. model.layers.25.input_layernorm.weight +0 -0
  41. model.layers.25.post_attention_layernorm.weight +0 -0
  42. model.layers.26.input_layernorm.weight +0 -0
  43. model.layers.26.post_attention_layernorm.weight +0 -0
  44. model.layers.27.input_layernorm.weight +0 -0
  45. model.layers.27.post_attention_layernorm.weight +0 -0
  46. model.layers.28.input_layernorm.weight +0 -0
  47. model.layers.28.post_attention_layernorm.weight +0 -0
  48. model.layers.29.input_layernorm.weight +0 -0
  49. model.layers.29.post_attention_layernorm.weight +0 -0
  50. model.layers.3.input_layernorm.weight +0 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ model.embed_tokens.weight filter=lfs diff=lfs merge=lfs -text
+ onnx* filter=lfs diff=lfs merge=lfs -text
+ model.onnx filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,136 @@
---
base_model:
- sbintuitions/sarashina2-7b
---

# Sarashina2 7B with Key-Value-Cache enabled in ONNX fp16 format
- Model creator: [SB Intuitions](https://huggingface.co/sbintuitions)
- Original model: [SB Intuitions Sarashina2 7B](https://huggingface.co/sbintuitions/sarashina2-7b)

<!-- description start -->
## Description

This repo contains the ONNX files for the conversion of Sarashina2 7B done by Esperanto Technologies.
The model is in fp16 format and has the key-value cache (KVC) enabled.

<!-- description end -->

## How to download ONNX model and weight files

The easiest way to obtain the model is to clone this whole repo.
Alternatively, you can download the files using the `huggingface-hub` Python library.

```shell
pip3 install "huggingface-hub>=0.17.1"
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download Esperanto/sarashina2-7b-kvc-fp16-onnx --local-dir sarashina2-7b-kvc-fp16-onnx --local-dir-use-symlinks False
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

## How to run from Python code using ONNXRuntime

This model can easily be run on a CPU using [ONNXRuntime](https://onnxruntime.ai/).

#### First install the packages

```bash
pip3 install onnx==1.16.1
pip3 install onnxruntime==1.17.1
```

#### Example code: generate text with this model

We define the generation loop with greedy decoding:
```python
import numpy as np
import onnxruntime
import onnx
from transformers import AutoTokenizer

def generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context):
    model = onnx.load(model_path)

    # We create the inputs for the first iteration.
    input_tensor = tokenizer(prompt, return_tensors="pt")
    prompt_size = len(input_tensor['input_ids'][0])
    actual_input = input_tensor['input_ids'].numpy()
    if prompt_size < window:
        actual_input = np.concatenate((tokenizer.bos_token_id * np.ones([1, window - prompt_size], dtype='int64'),
                                       actual_input), axis=1)
    if prompt_size + max_gen_tokens > total_sequence:
        print("ERROR: A longer total sequence is needed!")
        return
    first_attention = np.concatenate((np.zeros([1, total_sequence - window], dtype='int64'),
                                      np.ones((1, window), dtype='int64')), axis=1)
    max_gen_tokens += prompt_size  # We need to generate on top of parsing the prompt.
    inputs_names = [node.name for node in model.graph.input]
    output_names = [node.name for node in model.graph.output]
    inputs_dict = {}
    inputs_dict['input_ids'] = actual_input[:, :window].reshape(1, window)
    inputs_dict['attention_mask'] = first_attention
    for name in inputs_names:
        if name == 'input_ids' or name == 'attention_mask':
            continue
        inputs_dict[name] = np.zeros([1, context - window, 128], dtype="float16")
    new_token = np.array([10])
    next_index = window
    old_j = 0
    total_input = actual_input

    rt_session = onnxruntime.InferenceSession(model_path)
    # We run the inferences.
    while next_index < max_gen_tokens:
        if (new_token == tokenizer.eos_token_id).any():
            break
        # Inference
        output = rt_session.run(output_names, inputs_dict)
        outs_dictionary = {name: content for (name, content) in zip(output_names, output)}
        # We prepare the inputs for the next inference.
        for name in inputs_names:
            if name == 'input_ids':
                old_j = next_index
                if next_index < prompt_size:
                    if prompt_size - next_index >= window:
                        next_index += window
                    else:
                        next_index = prompt_size
                    j = next_index - window
                else:
                    next_index += 1
                    j = next_index - window
                new_token = outs_dictionary['logits'].argmax(-1).reshape(1, window)
                total_input = np.concatenate((total_input, new_token[:, -1:]), axis=1)
                inputs_dict['input_ids'] = total_input[:, j:next_index].reshape(1, window)
            elif name == 'attention_mask':
                inputs_dict['attention_mask'] = np.concatenate((np.zeros((1, total_sequence - next_index), dtype='int64'),
                                                                np.ones((1, next_index), dtype='int64')), axis=1)
            else:
                old_name = name.replace("past_key_values", "present")
                inputs_dict[name] = outs_dictionary[old_name][:, next_index - old_j:context - window + (next_index - old_j), :]

    answer = tokenizer.decode(total_input[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
    return answer
```
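The core decoding step in the loop above is a plain argmax over the logits for each position in the window, keeping only the last position's prediction as the newly generated token. A toy sketch with made-up logits (vocabulary of 5, window of 4):

```python
import numpy as np

# made-up logits: shape (batch=1, window=4, vocab=5)
logits = np.array([[[0.1, 2.0, 0.3, 0.0, 0.0],
                    [0.0, 0.1, 3.0, 0.2, 0.0],
                    [1.5, 0.0, 0.0, 0.1, 0.2],
                    [0.0, 0.0, 0.2, 0.1, 4.0]]])
new_token = logits.argmax(-1).reshape(1, 4)  # greedy pick per position
print(new_token[:, -1:])  # only the last position is appended: [[4]]
```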
We now run the inference:

```python
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-7b")
model_path = "sarashina2-7b-kvc-fp16-onnx/model.onnx"

max_gen_tokens = 20   # number of tokens we want to generate
total_sequence = 128  # total sequence length
context = 1024        # the context to extend the KVC
window = 16           # number of tokens we want to parse at a time
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

generated = generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context)
print(generated)
```
model.embed_tokens.weight ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94c3774246636a237ee733630b2298e6b3554738f0cbfe706cdc986f1b7518cd
+ size 838860800
model.layers.0.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.0.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.1.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.1.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.10.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.10.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.11.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.11.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.12.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.12.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.13.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.13.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.14.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.14.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.15.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.15.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.16.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.16.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.17.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.17.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.18.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.18.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.19.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.19.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.2.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.2.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.20.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.20.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.21.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.21.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.22.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.22.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.23.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.23.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.24.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.24.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.25.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.25.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.26.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.26.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.27.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.27.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.28.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.28.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.29.input_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.29.post_attention_layernorm.weight ADDED
Binary file (8.19 kB).

model.layers.3.input_layernorm.weight ADDED
Binary file (8.19 kB).