2nji committed
Commit a5a5de7 · verified · 1 Parent(s): 515cd07

Updates readme

Files changed (1): README.md (+22 -5)
README.md CHANGED
@@ -14,7 +14,7 @@ model-index:
 ---
 
 
-# sft
+# Supervised Fine-Tuned Model
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the open_platypus dataset.
 It achieves the following results on the evaluation set:
@@ -23,15 +23,30 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the open_platypus dataset.
 
 ## Intended uses & limitations
 
-More information needed
+### How to use
+
+You can use this model directly for text generation. Here is an example:
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# Load the fine-tuned checkpoint; if the repo hosts a PEFT adapter, `peft` must be installed
+tokenizer = AutoTokenizer.from_pretrained("2nji/llama3-platypus")
+model = AutoModelForCausalLM.from_pretrained("2nji/llama3-platypus")
+
+# Tokenize a prompt and generate a continuation
+inputs = tokenizer("Example input text", return_tensors="pt")
+with torch.no_grad():
+    outputs = model.generate(**inputs, max_new_tokens=64)
+
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
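+
+Because the base model is instruction-tuned, prompts usually work better when wrapped in the Llama 3 chat template. A minimal sketch, reusing `tokenizer` and `model` from the snippet above (it assumes the fine-tuned tokenizer keeps the base model's chat template):
+
+```python
+# Sketch: format a chat-style prompt, then generate a reply
+messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}]
+input_ids = tokenizer.apply_chat_template(
+    messages, add_generation_prompt=True, return_tensors="pt"
+)
+with torch.no_grad():
+    output_ids = model.generate(input_ids, max_new_tokens=128)
+
+# Decode only the newly generated tokens
+print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
+```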
 
 ## Training and evaluation data
 
-More information needed
+The model was fine-tuned on the open_platypus dataset.
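+
+For reference, a copy of the dataset is hosted on the Hub. A minimal loading sketch (the repo id `garage-bAInd/Open-Platypus` is an assumption about which release `open_platypus` refers to):
+
+```python
+from datasets import load_dataset
+
+# Assumed Hub id for the open_platypus dataset
+ds = load_dataset("garage-bAInd/Open-Platypus")
+print(ds["train"][0])
+```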
 
 ## Training procedure
 
@@ -51,10 +66,12 @@ The following hyperparameters were used during training:
 
 ### Training results
 
+The model was trained on a single NVIDIA H100 GPU with the following results:
+- Loss: 0.6769
+- Accuracy: 0.8116
 
 
 ### Framework versions
-
 - PEFT 0.11.1
 - Transformers 4.42.3
 - Pytorch 2.3.1+cu121
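+
+The framework list includes PEFT, so the published weights are likely a LoRA-style adapter rather than a merged checkpoint. If so, the model can also be loaded explicitly with `peft` (a sketch; it assumes the repo ships an `adapter_config.json` that points at the base model):
+
+```python
+from peft import AutoPeftModelForCausalLM
+
+# Resolves the adapter config, loads the base model, and attaches the adapter
+model = AutoPeftModelForCausalLM.from_pretrained("2nji/llama3-platypus")
+```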
 