Mxode committed
Commit cce4818 · verified · 1 Parent(s): a83d944

Create README_zh-CN.md

Files changed (1):
  1. README_zh-CN.md +56 -0
README_zh-CN.md ADDED
@@ -0,0 +1,56 @@
# NanoLM-70M-Instruct-v1

[English](README.md) | 简体中文

## Introduction

To explore the potential of small models, I have tried to build a series of small models, which are collected in the [NanoLM Collections](https://huggingface.co/collections/Mxode/nanolm-66d6d75b4a69536bca2705b2).

This is NanoLM-70M-Instruct-v1. The model currently supports **English** only.

## Model Details

NanoLM-70M-Instruct-v1 uses the same tokenizer and model architecture as [SmolLM-135M](https://huggingface.co/HuggingFaceTB/SmolLM-135M), but the number of layers is reduced from 30 to 12.

As a result, NanoLM-70M-Instruct-v1 has only 70M parameters.

Even so, NanoLM-70M-Instruct-v1 still demonstrates instruction-following capability.
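As a rough sanity check, the 70M figure can be reproduced from the architecture hyperparameters. The values below (hidden size 576, intermediate size 1536, vocab 49152, 9 query / 3 key-value heads, tied embeddings) are assumptions taken from the published SmolLM-135M config, not from this repo:

```python
def count_params(num_layers, hidden=576, intermediate=1536,
                 vocab=49152, n_heads=9, n_kv_heads=3):
    """Approximate parameter count for a Llama-style decoder (assumed config)."""
    head_dim = hidden // n_heads                      # 64
    kv_dim = n_kv_heads * head_dim                    # 192 (grouped-query attention)
    attn = 2 * hidden * hidden + 2 * hidden * kv_dim  # q/o + k/v projections
    mlp = 3 * hidden * intermediate                   # gate, up, down projections
    norms = 2 * hidden                                # pre-attn / pre-mlp RMSNorm
    embed = vocab * hidden                            # tied input/output embedding
    return num_layers * (attn + mlp + norms) + embed + hidden  # + final norm

print(f"30 layers: {count_params(30) / 1e6:.1f}M")  # ~134.5M (SmolLM-135M)
print(f"12 layers: {count_params(12) / 1e6:.1f}M")  # ~70.8M  (this model)
```

Cutting the depth from 30 to 12 layers removes the per-layer weights while the (tied) embedding stays fixed, which is why the total drops from roughly 135M to roughly 70M.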
## How to Use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = 'Mxode/NanoLM-70M-Instruct-v1'

# Load the model in bfloat16 on the first GPU
model = AutoModelForCausalLM.from_pretrained(model_path).to('cuda:0', torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Build the chat prompt with the model's chat template
text = "Why is it important for entrepreneurs to prioritize financial management?"
prompt = tokenizer.apply_chat_template(
    [
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': text}
    ],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors='pt'
).to('cuda:0')

outputs = model.generate(
    prompt,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(outputs[0])
print(response)
```