okeanos committed
Commit cf44bab · verified · parent: fda6abe

Upload folder using huggingface_hub

Files changed (1): README.md (+62, -0)
README.md ADDED
---
tags:
- merge
- mergekit
- lazymergekit
- codellama/CodeLlama-34b-Instruct-hf
- Phind/Phind-CodeLlama-34B-v2
base_model:
- codellama/CodeLlama-34b-Instruct-hf
- Phind/Phind-CodeLlama-34B-v2
---

# uptimeai-8273

uptimeai-8273 is a merge of the following models, created with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf)
* [Phind/Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2)

## 🧩 Configuration

The merge uses mergekit's `dare_ties` method. List-valued `density` and `weight` entries define gradients: mergekit interpolates between the listed values across the layer stack, so each layer gets its own effective density and weight.

```yaml
models:
  - model: codellama/CodeLlama-34b-Instruct-hf
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: Phind/Phind-CodeLlama-34B-v2
    parameters:
      density: 0.5
      weight: [0, 0.3, 0.7, 1] # weight gradient
merge_method: dare_ties
base_model: codellama/CodeLlama-34b-Instruct-hf
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
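
To reproduce the merge itself rather than download it, the configuration above can be passed to mergekit's `mergekit-yaml` entry point. A minimal sketch, assuming mergekit is installed and the configuration is saved as `config.yaml` (the config file name and output directory are placeholders):

```python
# Sketch of running the merge locally; paths are placeholders.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./uptimeai-8273 --copy-tokenizer --lazy-unpickle
```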

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "okeanos/uptimeai-8273"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt from the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model in float16, sharded across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
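
For more control over generation than the pipeline offers, the model can also be loaded directly. A minimal sketch under the same assumptions (enough GPU/CPU memory for a 34B model in float16, roughly 68 GB); the prompt is illustrative only:

```python
# Direct loading with generate(); a sketch, not part of the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "okeanos/uptimeai-8273"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # shard across available devices
)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```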