BEGADE committed on
Commit f174fdc · verified · 1 Parent(s): dd0fc57

Model save

Files changed (1):
  1. README.md +56 -14

README.md CHANGED
@@ -1,19 +1,61 @@
- This directory includes a few sample datasets to get you started.
-
- * `california_housing_data*.csv` is California housing data from the 1990 US
-   Census; more information is available at:
-   https://developers.google.com/machine-learning/crash-course/california-housing-data-description
-
- * `mnist_*.csv` is a small sample of the
-   [MNIST database](https://en.wikipedia.org/wiki/MNIST_database), which is
-   described at: http://yann.lecun.com/exdb/mnist/
-
- * `anscombe.json` contains a copy of
-   [Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet); it
-   was originally described in
-
-     Anscombe, F. J. (1973). 'Graphs in Statistical Analysis'. American
-     Statistician. 27 (1): 17-21. JSTOR 2682899.
-
- and our copy was prepared by the
- [vega_datasets library](https://github.com/altair-viz/vega_datasets/blob/4f67bdaad10f45e3549984e17e1b3088c731503d/vega_datasets/_data/anscombe.json).
+ ---
+ base_model: meta-llama/Meta-Llama-3-8B-Instruct
+ datasets:
+ - generator
+ library_name: peft
+ license: llama3
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ model-index:
+ - name: sample_data
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # sample_data
+
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant
+ - lr_scheduler_warmup_ratio: 0.03
+ - num_epochs: 4
+
+ ### Training results
+
+ ### Framework versions
+
+ - PEFT 0.7.2.dev0
+ - Transformers 4.36.2
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.16.1
+ - Tokenizers 0.15.2
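
A note on the hyperparameters in the diff above: the reported `total_train_batch_size` of 4 is not an independent setting but the product of the per-device batch size, the gradient-accumulation steps, and the device count (the device count of 1 is an assumption here, since the card does not state it). A minimal plain-Python sketch of that relationship:

```python
# Values as reported in the training-hyperparameters list above.
train_batch_size = 1             # per-device train batch size
gradient_accumulation_steps = 4
num_devices = 1                  # assumed: single-GPU training (not stated in the card)

# The trainer reports the effective batch size as the product of the three:
# gradients from 4 micro-batches of size 1 are accumulated before each
# optimizer step, so one update sees 4 examples.
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # → 4, matching the reported total_train_batch_size
```

This is why one optimizer step here corresponds to four forward/backward passes; with a constant LR scheduler, the learning rate of 2e-4 applies to every such update after the 3% warmup.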