---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- finetuned
- gguf
inference: false
pipeline_tag: text-generation
---
<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/speakleash_cyfronet.png">
</p>

# Bielik-11B-v2.2-Instruct-Quanto-8bit
This model was converted to the Quanto format from [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct).

## About Quanto
Optimum Quanto is a PyTorch quantization backend for Optimum. The model can be loaded with:

```python
from optimum.quanto import QuantizedModelForCausalLM

qmodel = QuantizedModelForCausalLM.from_pretrained('speakleash/Bielik-11B-v2.2-Instruct-Quanto-8bit')
```
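For readers unfamiliar with 8-bit quantization, the core idea is to store each weight as a signed 8-bit integer alongside a floating-point scale factor. The sketch below is a conceptual illustration only; it is not Quanto's actual implementation, which supports per-tensor and per-channel scales and also handles activations:

```python
# Conceptual sketch of symmetric 8-bit weight quantization
# (illustration only -- not Quanto's actual implementation).

def quantize_int8(weights):
    """Map float weights to int8 values plus a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0  # largest weight maps to +/-127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

In practice Quanto chooses the scales, quantizes during model conversion, and dequantizes transparently in the forward pass; the sketch only shows the weight round-trip that makes the checkpoint roughly four times smaller than fp32.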

### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Quantized from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct)
* **Finetuned from:** [Bielik-11B](https://huggingface.co/speakleash/Bielik-11B)
* **License:** apache-2.0

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/3G9DVM39).