language:
- ko
pipeline_tag: translation
---

# Gugugo-koen-7B-V1.1
For details, see the GitHub repo: [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo)
![Gugugo](./logo.png)

**Base Model**: [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

**Training Dataset**: [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)

I trained the model on a single A6000 GPU for 90 hours.

## **Prompt Template**
**KO->EN**
```
### ν•œκ΅­μ–΄: {sentence}</끝>
### μ˜μ–΄:
```
**EN->KO**
```
### μ˜μ–΄: {sentence}</끝>
### ν•œκ΅­μ–΄:
```
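For example, a KO->EN request substitutes the source sentence for `{sentence}`. A minimal sketch (the `</끝>` marker closes the input and is also what the model emits to end its translation, matching the `gen` function below):

```python
src = "μ•ˆλ…•ν•˜μ„Έμš”."  # hypothetical Korean input
prompt = f"### ν•œκ΅­μ–΄: {src}</끝>\n### μ˜μ–΄:"
# The model completes the text after "### μ˜μ–΄:" with the English
# translation, terminated by another "</끝>" marker.
```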

## **Implementation Code**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
import torch

repo = "squarelike/Gugugo-koen-7B-V1.1"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    load_in_4bit=True,  # 4-bit quantization via bitsandbytes
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)


class StoppingCriteriaSub(StoppingCriteria):
    def __init__(self, stops=[], encounters=1):
        super().__init__()
        self.stops = [stop for stop in stops]

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs):
        # Stop as soon as the most recently generated tokens match any stop sequence.
        for stop in self.stops:
            stop = stop.to(input_ids.device)  # stops are created on CPU; match the model device
            if torch.all((stop == input_ids[0][-len(stop):])).item():
                return True
        return False


# Stop sequences: token id variants of the "</끝>" end marker.
stop_words_ids = torch.tensor([[829, 45107, 29958], [1533, 45107, 29958], [829, 45107, 29958], [21106, 45107, 29958]])
stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)])


def gen(lan="en", x=""):
    # lan="ko": Korean -> English; otherwise English -> Korean.
    if lan == "ko":
        prompt = f"### ν•œκ΅­μ–΄: {x}</끝>\n### μ˜μ–΄:"
    else:
        prompt = f"### μ˜μ–΄: {x}</끝>\n### ν•œκ΅­μ–΄:"
    gened = model.generate(
        **tokenizer(
            prompt,
            return_tensors='pt',
            return_token_type_ids=False
        ),
        max_new_tokens=1000,
        temperature=0.1,
        no_repeat_ngram_size=10,
        early_stopping=True,
        do_sample=True,
        eos_token_id=2,
        stopping_criteria=stopping_criteria
    )
    # Strip the prompt and the end marker from the decoded output.
    return tokenizer.decode(gened[0][1:]).replace(prompt + " ", "").replace("</끝>", "")


print(gen(lan="en", x="Hello, world!"))
```
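Note that `load_in_4bit=True` requires the `bitsandbytes` package, and recent `transformers` releases prefer an explicit quantization config over the bare flag. A minimal equivalent sketch, assuming a `transformers` version with `BitsAndBytesConfig` available:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Equivalent 4-bit loading via an explicit quantization config.
model = AutoModelForCausalLM.from_pretrained(
    "squarelike/Gugugo-koen-7B-V1.1",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```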