davidsi committed · Commit 6053454 · verified · 1 Parent(s): 18ba7ce

Update README.md

Files changed (1):
  1. README.md +5 -12
README.md CHANGED
@@ -13,13 +13,12 @@ specifically for expertise on AMD technologies and python coding.

 <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub.
- This model card has been automatically generated.
+ This is the model card of a 🤗 transformers model that has been
+ pushed on the Hub. This model card has been automatically generated.

 - **Developed by:** David Silverstein
- - **Model type:** [More Information Needed]
 - **Language(s) (NLP):** English, Python
- - **License:** [More Information Needed]
+ - **License:** Free to use under Llama 3.1 licensing terms without warranty
 - **Finetuned from model meta-llama/Meta-Llama-3.1-8B-Instruct**

 ### Model Sources [optional]
@@ -47,7 +46,7 @@ limitations of the model. More information needed for further recommendations.

 ## How to Get Started with the Model

- Use the code below to get started with the model.
+ Use the code below to get started with the model:

 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
@@ -68,14 +67,8 @@ The training set consisted of 1658 question/answer pairs in Alpaca format.

 [More Information Needed]

- ### Training Procedure
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
- [More Information Needed]
-
 #### Training Hyperparameters
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+ - **Training regime:** [bf16 non-mixed precision] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

 ## Evaluation
 <!-- This section describes the evaluation protocols and provides the results. -->
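
The quick-start snippet in the "How to Get Started with the Model" hunk is cut off after the two import lines. Below is a minimal sketch of how a fine-tune like this is typically loaded and queried with the standard transformers API; the repo id is a placeholder (the actual Hub id is not shown in this diff), and the bf16 load matches the training regime noted under Training Hyperparameters.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder repo id: substitute the actual Hub id of this fine-tune.
model_id = "davidsi/<model-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# bf16 weights match the "bf16 non-mixed precision" regime noted above.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama 3.1 Instruct fine-tunes expect chat-formatted prompts.
messages = [
    {"role": "user", "content": "How do I list available AMD GPUs from PyTorch on ROCm?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```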
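
The Training Data hunk states that the set consisted of 1658 question/answer pairs in Alpaca format. The pairs themselves are not included in this diff; the sketch below only illustrates what an Alpaca-format record conventionally looks like, using a hypothetical example in the model's domain.

```python
# Hypothetical record illustrating the Alpaca format; the real training pairs
# cover AMD technologies and Python but are not shown in this diff.
record = {
    "instruction": "Explain how to check whether a ROCm build of PyTorch can see an AMD GPU.",
    "input": "",
    "output": "Call torch.cuda.is_available(); ROCm builds of PyTorch report AMD GPUs through the same API.",
}

# A typical Alpaca-style prompt assembled from such a record.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{record['instruction']}\n\n"
    f"### Response:\n"
)
print(prompt + record["output"])
```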
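
The updated Training Hyperparameters line specifies bf16 non-mixed precision. As a sketch only (an assumption about tooling, not a description of how the author trained), one way to realize that with the Hugging Face Trainer stack is to load the base weights directly in bfloat16 and leave the mixed-precision autocast flags off, so weights and activations stay in bf16 throughout.

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

# Load the base model directly in bfloat16 (non-mixed: no fp32 master weights via AMP).
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
)

# Keep the autocast flags off so training runs natively in bf16 rather than mixed precision.
training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    num_train_epochs=3,
    bf16=False,
    fp16=False,
)
```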