davidsi committed
Commit 43c5650 · verified · 1 Parent(s): f8e2814

Update README.md

Files changed (1): README.md +7 -1
README.md CHANGED
@@ -3,6 +3,10 @@ library_name: transformers
 language:
 - en
 pipeline_tag: text-generation
+license: llama3.1
+tags:
+- Safetensors
+- Text Generation
 ---
 <!-- Provide a quick summary of what the model is/does. -->
@@ -48,12 +52,14 @@ limitations of the model. More information needed for further recommendations.
 
 Use the code below to get started with the model:
 
+~~~
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 model_name = 'davidsi/Llama3_1-8B-Instruct-AMD-python'
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 llm = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
+~~~
 
 [More Information Needed]
@@ -81,4 +87,4 @@ The training set consisted of 1658 question/answer pairs in Alpaca format.
 ### Model Architecture and Objective
 
 This model is a finetuned version of Llama 3.1, which is an auto-regressive language
-model that uses an optimized transformer architecture.
+model that uses an optimized transformer architecture.
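For context beyond the diff: the snippet added in this commit only loads the tokenizer and model. A minimal generation sketch built on top of it might look like the following; the chat-template call, the prompt, and the generation settings are assumptions for illustration, not part of the commit, and presume the tokenizer ships the standard Llama 3.1 chat template.

~~~
# Sketch only; loading lines mirror the README, the rest is assumed usage.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = 'davidsi/Llama3_1-8B-Instruct-AMD-python'
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Build a single-turn prompt with the tokenizer's chat template
# (assumes the checkpoint inherits Llama 3.1's instruct template).
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate and decode only the newly produced tokens.
outputs = llm.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
~~~

Loading in bfloat16, as the README does, roughly halves memory versus float32 while keeping the dynamic range Llama-family weights are typically trained in.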