ttagu99 committed
Commit ce61722 · 1 Parent(s): 2b06ce2

Create README.md

Files changed (1): README.md (+57, -0)
---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
language:
- en
---

# Model Card for mncai/agiin-11.1B-v0.1

### Introduction of MindsAndCompany

https://mnc.ai/

We build a variety of AI models and develop solutions that can be applied to businesses. In generative AI, we are developing products such as a Code Assistant, a TOD Chatbot, and LLMOps, and we are working toward Enterprise AGI (Artificial General Intelligence).

### Model Summary
This model is built on the Mistral architecture and was inspired by neural connection technology and rehabilitation therapy.
I created a new model architecture that does not require pretraining; training the model takes only about 7 hours on a single H100.

### Data
Intel/orca_dpo_pairs (DPO)
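
As a rough illustration of the preference-pair format (not the exact training script), each record in Intel/orca_dpo_pairs can be mapped to the prompt/chosen/rejected triples that DPO training expects. The column names below follow the public dataset; the `to_dpo_format` helper itself is purely hypothetical:

```python
# Hypothetical sketch: map Intel/orca_dpo_pairs rows to DPO prompt/chosen/rejected triples.
from datasets import load_dataset

ds = load_dataset("Intel/orca_dpo_pairs", split="train")

def to_dpo_format(example):
    # "system" and "question" together form the prompt;
    # "chosen" / "rejected" are the preferred and dispreferred completions.
    prompt = f"{example['system']}\n\n{example['question']}".strip()
    return {
        "prompt": prompt,
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

dpo_ds = ds.map(to_dpo_format, remove_columns=ds.column_names)
print(dpo_ds[0]["prompt"][:200])
```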

### Surgery and Training

We stack Mistral decoder layers to a depth of 50 layers and then fine-tune the stacked model with DPO.
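
As a minimal sketch of the layer-stacking step (the exact layer split used for agiin-11.1B is not documented here, so the indices below are assumptions), one way to grow a 32-layer Mistral-7B into a 50-layer model before DPO fine-tuning looks roughly like this:

```python
# Minimal sketch of depth stacking: duplicate middle decoder layers of Mistral-7B
# to reach 50 layers. The 25/18/7 split and the output path are assumptions,
# not the published recipe.
import copy
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)

layers = base.model.layers                                  # 32 decoder layers
extra = [copy.deepcopy(layers[i]) for i in range(7, 25)]    # copies of 18 middle layers
stacked = list(layers[:25]) + extra + list(layers[25:])     # 25 + 18 + 7 = 50 layers

base.model.layers = torch.nn.ModuleList(stacked)
base.config.num_hidden_layers = len(base.model.layers)
for idx, layer in enumerate(base.model.layers):
    layer.self_attn.layer_idx = idx                         # keep KV-cache indices consistent

base.save_pretrained("agiin-50layer-init")                  # starting point for DPO fine-tuning
```

The stacked checkpoint would then be fine-tuned with DPO (for example with trl's DPOTrainer) on the preference pairs prepared above.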

### How to Use

```python
import transformers
from transformers import AutoTokenizer

hf_model = "mncai/agiin-11.1B-v0.1"

# Build a chat-formatted prompt with the model's chat template.
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Two spheres have diameters of 1 and 2. How many times larger is the volume of the bigger sphere? Please explain."}
]
tokenizer = AutoTokenizer.from_pretrained(hf_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer
)

sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=512,
)
print(sequences[0]['generated_text'])
```

### Contact
If you have any questions, please raise an issue or contact us at [email protected].