---
license: llama2
language:
- ko
library_name: transformers
base_model: beomi/llama-2-ko-7b
pipeline_tag: text-generation
---

# **msy127/ft_240201_01**


## Our Team

| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
|     David Sohn         |      David Sohn    |


## **Model Details**

### **Base Model**

[beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

### **Trained On**

-   **OS**: Ubuntu 22.04
-   **GPU**: 1× A100 40GB
-   **transformers**: v4.37

### **Instruction format**

It follows a **custom** instruction format. For example:

```python
text = """\
<|user|>
κ±΄κ°•ν•œ μ‹μŠ΅κ΄€μ„ λ§Œλ“€κΈ° μœ„ν•΄μ„œλŠ” μ–΄λ–»κ²Œ ν•˜λŠ”κ²ƒμ΄ μ’‹μ„κΉŒμš”?
<|assistant|>
"""
```
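
For illustration, a minimal helper that renders a single-turn prompt in this format (a sketch; the `build_prompt` name and the exact newline handling are assumptions, not part of the model card):

```python
def build_prompt(user_message: str) -> str:
    # Wrap the user's message in the custom <|user|>/<|assistant|> tags;
    # the model's reply is generated after the trailing <|assistant|> tag.
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = build_prompt("건강한 식습관을 만들기 위해서는 어떻게 하는것이 좋을까요?")
```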


## **Implementation Code**

The tokenizer for this model ships with a `chat_template` that encodes the instruction format above.  
You can use the code below.

```python
# Option 1: use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="msy127/ft_240201_01")

# Option 2: load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("msy127/ft_240201_01")
model = AutoModelForCausalLM.from_pretrained("msy127/ft_240201_01")
```
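
Putting the pieces together, here is a minimal end-to-end generation sketch. It assumes the bundled `chat_template` renders the custom format shown above; the dtype, device placement, and sampling parameters are illustrative choices, not settings from the model card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("msy127/ft_240201_01")
model = AutoModelForCausalLM.from_pretrained(
    "msy127/ft_240201_01",
    torch_dtype=torch.float16,  # assumption: fp16 fits on a single A100 40GB
    device_map="auto",
)

messages = [{"role": "user", "content": "건강한 식습관을 만들기 위해서는 어떻게 하는것이 좋을까요?"}]

# apply_chat_template uses the chat_template bundled with the tokenizer,
# so the rendered prompt matches the <|user|>/<|assistant|> format above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,   # illustrative generation settings, not tuned values
    do_sample=True,
    temperature=0.7,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```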

## **Introduction to our service platform**
- An AI companion service platform that talks with you while looking at your face.
- A preview of the future of character-AI services such as character.ai.
- https://livetalkingai.com