msy127 committed on
Commit d096447 · verified · 1 Parent(s): 7bc46c6

Update README.md

Files changed (1): README.md +67 -0

README.md CHANGED
@@ -1,3 +1,70 @@
 ---
 license: llama2
+language:
+- ko
+library_name: transformers
+base_model: beomi/llama-2-ko-7b
+pipeline_tag: text-generation
 ---
+
+# **msy127/ft_240201_01**
+
+## Our Team
+
+| Research & Engineering | Product Management |
+| :--------------------: | :----------------: |
+|       David Sohn       |     David Sohn     |
+
+## **Model Details**
+
+### **Base Model**
+
+[beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
+
+### **Trained On**
+
+- **OS**: Ubuntu 22.04
+- **GPU**: 1× A100 40GB
+- **transformers**: v4.37
+
+### **Instruction format**
+
+It follows a **custom** format.
+
+E.g.
+
+```python
+# "How should one go about building healthy eating habits?"
+text = """\
+<|user|>
+건강한 식습관을 만들기 위해서는 어떻게 하는것이 좋을까요?
+<|assistant|>
+"""
+```
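The prompt above can also be assembled programmatically. A minimal sketch, assuming the `<|user|>` / `<|assistant|>` layout shown in the example; the `build_prompt` helper is illustrative and not part of the released model:

```python
# Illustrative helper (not shipped with the model): wraps a user message
# in the custom <|user|> / <|assistant|> instruction format shown above.
def build_prompt(user_message: str) -> str:
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = build_prompt("What is a good way to build healthy eating habits?")
print(prompt)
```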
+
+## **Implementation Code**
+
+This model's tokenizer includes a `chat_template` for the instruction format.
+You can use the code below.
+
+```python
+# Option 1: use a pipeline as a high-level helper
+from transformers import pipeline
+
+pipe = pipeline("text-generation", model="msy127/ft_240201_01")
+
+# Option 2: load the tokenizer and model directly
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained("msy127/ft_240201_01")
+model = AutoModelForCausalLM.from_pretrained("msy127/ft_240201_01")
+```
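Text-generation pipelines typically return the prompt followed by the model's continuation. A minimal post-processing sketch, assuming the `<|assistant|>` marker from the custom format above; the `extract_reply` helper is illustrative, not part of this repository:

```python
# Illustrative helper (not part of the model card): keep only the text
# after the last <|assistant|> marker in a generated string.
def extract_reply(generated: str) -> str:
    marker = "<|assistant|>"
    _, sep, reply = generated.rpartition(marker)
    # If the marker is absent, rpartition puts the whole string in `reply`.
    return reply.strip() if sep else generated.strip()

sample = "<|user|>\nhello\n<|assistant|>\nHi there!"
print(extract_reply(sample))  # -> Hi there!
```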