Eurdem committed on
Commit 0d00bf4
1 Parent(s): c8c9b8f

Update README.md

Files changed (1):
  1. README.md +55 -5
README.md CHANGED
@@ -6,24 +6,26 @@ tags:
 - llama-3
 language:
 - en
+- tr
 pipeline_tag: text-generation
 library_name: transformers
 ---
 
-# Megatron_llama3_2x8B
 
+## 💻 For English
 Megatron_llama3_2x8B is a Mixture of Experts (MoE) built from two Llama-3 8B models.
 
 
-## 💻 Usage
-
 ```python
 !pip install -qU transformers bitsandbytes accelerate
 
+from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
+
 model_id = "Eurdem/Megatron_llama3_2x8B"
 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
-model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_4bit=True)
+model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_8bit=True)
 
 messages = [
     {"role": "system", "content": "You are a helpful chatbot who always responds friendly."},
@@ -42,4 +44,52 @@ outputs = model.generate(input_ids,
 )
 response = outputs[0][input_ids.shape[-1]:]
 print(tokenizer.decode(response, skip_special_tokens=True))
-```
+```
+
+# Megatron_llama3_2x8B
+
+## 💻 For Turkish
+
+```python
+!pip install -qU transformers bitsandbytes accelerate
+
+from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
+
+model_id = "Eurdem/Megatron_llama3_2x8B"
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_4bit=True)
+
+messages = [
+    {"role": "system", "content": "Sen Defne isimli Türkçe konuşan bir chatbotsun."},  # "You are a Turkish-speaking chatbot named Defne."
+    {"role": "user", "content": "Sana 2 sorum var. 1) Sen kimsin? 2) f(x)=3x^2+4x+12 ise f(3) kaçtır?"}  # "I have 2 questions for you. 1) Who are you? 2) If f(x)=3x^2+4x+12, what is f(3)?"
+]
+
+input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
+
+outputs = model.generate(input_ids,
+                         max_new_tokens=1024,
+                         do_sample=True,
+                         temperature=0.7,
+                         top_p=0.7,
+                         top_k=500,
+                         eos_token_id=tokenizer.eos_token_id
+)
+response = outputs[0][input_ids.shape[-1]:]
+print(tokenizer.decode(response, skip_special_tokens=True))
+```
+
+### Output (translated from Turkish)
+```
+Hello! I am Sen Defne, a Turkish-speaking chatbot. I am at your service.
+
+For the second part of your question, let's take the formula f(x) = 3x^2 + 4x + 12 and compute f(3):
+
+f(3) = 3(3)^2 + 4(3) + 12
+     = 3(9) + 12 + 12
+     = 27 + 24
+     = 51
+
+Therefore, f(3) is 51.
+```
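
The card describes the model as an MoE built from two Llama-3 models. Merges of this shape commonly expose a Mixtral-style configuration; a minimal sketch for checking the expert layout without downloading the weights, assuming that convention holds (the `num_local_experts` and `num_experts_per_tok` attribute names are an assumption, not stated on this card):

```python
from transformers import AutoConfig

# Download only the config (a few KB), not the 2x8B weights.
config = AutoConfig.from_pretrained("Eurdem/Megatron_llama3_2x8B")

# Mixtral-style MoE configs expose the expert layout under these names;
# both attribute names are an assumption, not confirmed by this card.
print(getattr(config, "num_local_experts", "n/a"))    # expected: 2 experts
print(getattr(config, "num_experts_per_tok", "n/a"))  # experts routed per token
```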
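Both snippets pass `load_in_8bit`/`load_in_4bit` directly to `from_pretrained`; recent transformers releases deprecate those bare kwargs in favor of an explicit `BitsAndBytesConfig`. A sketch of the equivalent 4-bit load under that API (the compute dtype is an assumption mirroring the card's bfloat16 choice):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Explicit quantization config, replacing the bare load_in_4bit/8bit kwargs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: mirrors the card's bfloat16 choice
)

model = AutoModelForCausalLM.from_pretrained(
    "Eurdem/Megatron_llama3_2x8B",
    device_map="auto",
    quantization_config=bnb_config,
)
```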
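Both snippets also call `apply_chat_template` without `add_generation_prompt=True`; with Llama-3-style templates that flag appends the assistant header so generation continues as the assistant rather than the user. A sketch of that variant (whether this model's template requires it is not stated on the card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Eurdem/Megatron_llama3_2x8B")
messages = [{"role": "user", "content": "Hello!"}]  # hypothetical one-turn chat

# add_generation_prompt=True appends the assistant header defined by the
# chat template, so generate() continues as the assistant, not the user.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)
```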