Upload folder using huggingface_hub

#1
Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +53 -0
  3. mistral-7b-instruct-v1.0-f16.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+mistral-7b-instruct-v1.0-f16.gguf filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,56 @@
 ---
 license: apache-2.0
+pipeline_tag: text-generation
+tags:
+- finetuned
 ---
+
+# GGUF version of Mistral-7B-Instruct-v0.1
+
+A GGUF version of Mistral-7B-Instruct-v0.1, compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp).
+
+This is the unquantized fp16 version of the model.
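+
+As a quick way to try the file, here is a minimal sketch using the third-party [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings (an assumption; any llama.cpp-compatible runtime works):
+
+```python
+# Smoke test: load the GGUF file shipped in this repo and run one
+# instruction-formatted prompt. llama.cpp prepends the begin-of-sentence
+# token itself, so the prompt starts directly with [INST].
+from llama_cpp import Llama
+
+llm = Llama(model_path="mistral-7b-instruct-v1.0-f16.gguf", n_ctx=4096)
+
+out = llm("[INST] What is your favourite condiment? [/INST]", max_tokens=128)
+print(out["choices"][0]["text"])
+```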
+
+# Model Card for Mistral-7B-Instruct-v0.1
+
+The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, fine-tuned on a variety of publicly available conversation datasets.
+
+For full details of this model, please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
+
+## Instruction format
+
+In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation is terminated by the end-of-sentence token id.
+
+For example:
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+device = "cuda"  # the device to load the model onto
+
+model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
+tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
+
+# Multi-turn prompt in the [INST] ... [/INST] format. The parentheses make
+# Python concatenate the adjacent string literals into a single prompt.
+text = (
+    "<s>[INST] What is your favourite condiment? [/INST]"
+    "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
+    "[INST] Do you have mayonnaise recipes? [/INST]"
+)
+
+# <s> and </s> are already written into the string, so skip the tokenizer's
+# own special tokens.
+encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)
+
+model_inputs = encodeds.to(device)
+model.to(device)
+
+generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
+decoded = tokenizer.batch_decode(generated_ids)
+print(decoded[0])
+```
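+
+On recent `transformers` versions, the same prompt can also be built from the tokenizer's chat template rather than written by hand (a sketch, assuming a version that ships chat templates):
+
+```python
+messages = [
+    {"role": "user", "content": "What is your favourite condiment?"},
+    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice."},
+    {"role": "user", "content": "Do you have mayonnaise recipes?"},
+]
+
+# apply_chat_template inserts the [INST]/[/INST] tags and special tokens,
+# returning token ids ready to pass to generate()
+model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
+```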
+
+## Model Architecture
+This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
+- Grouped-Query Attention
+- Sliding-Window Attention
+- Byte-fallback BPE tokenizer
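+
+These choices can be checked against the base model's configuration; a minimal sketch, assuming the attribute names exposed by the `transformers` Mistral config:
+
+```python
+from transformers import AutoConfig
+
+config = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
+
+# Grouped-query attention: fewer key/value heads than query heads
+print(config.num_attention_heads, config.num_key_value_heads)
+
+# Sliding-window attention span, in tokens
+print(config.sliding_window)
+```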
+
+## The Mistral AI Team
+
+Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
mistral-7b-instruct-v1.0-f16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:235ef702af48f30f11a1af8caff56140340266cd4cabb00809cd0ddb14efa899
+size 14484731424