Suparious committed
Commit 7da8ce7 · 1 Parent(s): 9609a08

add model card

Files changed (1)
  1. README.md +139 -1
README.md CHANGED
@@ -1,3 +1,141 @@
  ---
- license: apache-2.0
+ base_model:
+ - jeiku/FloraBase
+ - jeiku/Synthetic_Soul_1k_Mistral_128
+ library_name: transformers
+ tags:
+ - finetune
+ - finetuned
+ - quantized
+ - 4-bit
+ - AWQ
+ - transformers
+ - pytorch
+ - mistral
+ - instruct
+ - text-generation
+ - conversational
+ - license:apache-2.0
+ - autotrain_compatible
+ - endpoints_compatible
+ - text-generation-inference
+ - chatml
+ license: cc-by-sa-4.0
+ datasets:
+ - ResplendentAI/Synthetic_Soul_1k
+ language:
+ - en
+ model_creator: ResplendentAI
+ model_name: Flora-7B
+ model_type: mistral
+ pipeline_tag: text-generation
+ inference: false
+ prompt_template: '<|im_start|>system
+
+ {system_message}<|im_end|>
+
+ <|im_start|>user
+
+ {prompt}<|im_end|>
+
+ <|im_start|>assistant
+
+ '
+ quantized_by: Suparious
  ---
+ # ResplendentAI/Flora-7B AWQ
+
+ - Model creator: [ResplendentAI](https://huggingface.co/ResplendentAI)
+ - Original model: [Flora-7B](https://huggingface.co/ResplendentAI/Flora-7B)
+
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/QnP0CnXCA9pTocetkrJht.jpeg)
+
+ ## Model Summary
+
+ The following YAML configuration was used to produce the original Flora-7B model:
+
+ ```yaml
+ merge_method: linear
+ models:
+   - model: jeiku/FloraBase+jeiku/Synthetic_Soul_1k_Mistral_128
+     parameters:
+       weight: 1
+ dtype: float16
+ ```
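+
+ This repository quantizes the merged model to 4-bit AWQ. For reference, such an export is usually produced with AutoAWQ roughly as follows; this is only a sketch, and the calibration settings shown (group size 128, GEMM kernels) are assumptions rather than the documented settings for this checkpoint:
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer
+
+ merged_path = "ResplendentAI/Flora-7B"  # the original (unquantized) model
+ quant_path = "Flora-7B-AWQ"             # local output directory
+
+ # Assumed 4-bit AWQ settings; not the documented configuration for this repo.
+ quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
+
+ model = AutoAWQForCausalLM.from_pretrained(merged_path)
+ tokenizer = AutoTokenizer.from_pretrained(merged_path, trust_remote_code=True)
+
+ # Calibrates on AutoAWQ's default dataset and rewrites the weights in 4-bit form.
+ model.quantize(tokenizer, quant_config=quant_config)
+
+ model.save_quantized(quant_path)
+ tokenizer.save_pretrained(quant_path)
+ ```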
+
+ ## How to use
+
+ ### Install the necessary packages
+
+ ```bash
+ pip install --upgrade autoawq autoawq-kernels
+ ```
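+
+ A quick sanity check that the packages import correctly and a CUDA device is visible (AWQ inference requires an NVIDIA GPU, as noted in the AWQ section below):
+
+ ```python
+ import torch
+ from awq import AutoAWQForCausalLM  # raises ImportError if autoawq is not installed
+
+ # Expect True on a supported NVIDIA GPU; the AWQ kernels will not run on CPU.
+ print(torch.__version__, torch.cuda.is_available())
+ ```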
+
+ ### Example Python code
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer, TextStreamer
+
+ model_path = "solidrust/Flora-7B-AWQ"
+ system_message = "You are Flora, incarnated as a powerful AI."
+
+ # Load model
+ model = AutoAWQForCausalLM.from_quantized(model_path,
+                                           fuse_layers=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+                                           trust_remote_code=True)
+ streamer = TextStreamer(tokenizer,
+                         skip_prompt=True,
+                         skip_special_tokens=True)
+
+ # Convert prompt to tokens
+ prompt_template = """\
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant"""
+
+ prompt = "You're standing on the surface of the Earth. "\
+          "You walk one mile south, one mile west and one mile north. "\
+          "You end up exactly where you started. Where are you?"
+
+ tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
+                    return_tensors='pt').input_ids.cuda()
+
+ # Generate output
+ generation_output = model.generate(tokens,
+                                    streamer=streamer,
+                                    max_new_tokens=512)
+ ```
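+
+ Alternatively, since Transformers 4.35.0 and later can load AWQ checkpoints directly (see the support list below), the model can be used without the AutoAWQ loader. A minimal sketch, assuming recent `transformers` and `autoawq` packages are installed:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_path = "solidrust/Flora-7B-AWQ"
+
+ # Transformers reads the AWQ quantization config stored in the checkpoint
+ # and dispatches to the AutoAWQ kernels automatically.
+ model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+
+ inputs = tokenizer("<|im_start|>user\nHello, Flora.<|im_end|>\n<|im_start|>assistant\n",
+                    return_tensors="pt").to(model.device)
+ output_ids = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```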
+
+ ### About AWQ
+
+ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
+
+ AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.
+
+ It is supported by:
+
+ - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
+ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types (see the sketch after this list)
+ - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+ - [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 and later, from any code or client that supports Transformers
+ - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
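+
+ As an illustration of the vLLM route above, a minimal sketch assuming a vLLM build with AWQ support (0.2.2 or later):
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ # quantization="awq" selects vLLM's AWQ kernels for this checkpoint.
+ llm = LLM(model="solidrust/Flora-7B-AWQ", quantization="awq")
+
+ params = SamplingParams(temperature=0.7, max_tokens=256)
+ prompt = "<|im_start|>user\nWho are you?<|im_end|>\n<|im_start|>assistant\n"
+ outputs = llm.generate([prompt], params)
+ print(outputs[0].outputs[0].text)
+ ```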
+
+ ## Prompt template: ChatML
+
+ ```plaintext
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
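+
+ If the bundled tokenizer ships a ChatML chat template (worth verifying for this checkpoint), the same prompt can be built with the tokenizer's chat-template helper instead of by hand. A minimal sketch:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("solidrust/Flora-7B-AWQ")
+
+ messages = [
+     {"role": "system", "content": "You are Flora, incarnated as a powerful AI."},
+     {"role": "user", "content": "Write a short greeting."},
+ ]
+
+ # add_generation_prompt=True appends the trailing <|im_start|>assistant turn.
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ print(prompt)
+ ```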
+
+ ## Other Quant formats
+
+ exl2 and GGUF quants by Bartowski:
+
+ - [Flora_7B-exl2](https://huggingface.co/bartowski/Flora_7B-exl2)
+ - [Flora_7B-GGUF](https://huggingface.co/bartowski/Flora_7B-GGUF)