jylee420 committed
Commit 5462782 · verified · 1 Parent(s): 0576947

Update README.md

Files changed (1)
  1. README.md +10 -15
README.md CHANGED
@@ -6,8 +6,7 @@ tags: []
 # Model Card for Model ID

 <!-- Provide a quick summary of what the model is/does. -->
-
-
+This model card corresponds to the 2B base version of the Gemma model.

 ## Model Details

@@ -15,27 +14,23 @@

 <!-- Provide a longer summary of what this model is. -->

-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+This model splits terms into their constituent words and provides a description for each word.

 - **Developed by:** [email protected]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
 - **Language(s) (NLP):** Korean/English
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
-
-### Model Sources [optional]

-<!-- Provide the basic links for the model. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]

 ## Uses

 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+model = AutoModelForCausalLM.from_pretrained(
+    model_path,
+    vocab_size=len(tokenizer),
+    torch_dtype=torch.float16,
+    use_cache=False,
+    # attn_implementation="flash_attention_2",
+    device_map="auto")
+

 ### Direct Use
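
The loading snippet added to the `## Uses` section above is not self-contained: it assumes `torch`, `transformers`, a `model_path`, and an already-loaded `tokenizer`. Below is a minimal sketch of how it could be run end to end under those assumptions; the checkpoint path, prompt format, and generation settings are illustrative placeholders, not taken from the commit.

```python
# Minimal sketch, not part of the commit: fills in the imports and tokenizer
# that the README snippet assumes. `model_path` and the prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/gemma-2b-checkpoint"  # placeholder: local dir or Hub repo id

# Load the tokenizer first; the snippet sizes the embedding table with len(tokenizer).
tokenizer = AutoTokenizer.from_pretrained(model_path)

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    vocab_size=len(tokenizer),
    torch_dtype=torch.float16,
    use_cache=False,              # as in the README; re-enable for faster generation
    # attn_implementation="flash_attention_2",  # optional, requires flash-attn
    device_map="auto",
)

# Illustrative inference: ask the model to split a term and describe each word.
prompt = "Term: database management system\nWord-by-word description:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that `use_cache=False` and the commented-out flash-attention flag suggest the snippet was written with fine-tuning in mind; for pure inference, `use_cache=True` is the usual choice.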