DevPanda004 committed on
Commit d8649f8 · verified · 1 Parent(s): 24b1b2a

Model save

Files changed (1): README.md (+4, −6)

README.md CHANGED
```diff
@@ -3,8 +3,6 @@ library_name: peft
 license: cc-by-nc-4.0
 base_model: facebook/musicgen-small
 tags:
-- text-to-audio
-- indian
 - generated_from_trainer
 model-index:
 - name: musicgen-small-ft1
@@ -16,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # musicgen-small-ft1
 
-This model is a fine-tuned version of [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) on the DevPanda004/indian-music-and-metadata-3 dataset.
+This model is a fine-tuned version of [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) on an unknown dataset.
 
 ## Model description
 
@@ -36,14 +34,14 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 8
+- train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 8
-- total_train_batch_size: 64
+- total_train_batch_size: 128
 - optimizer: Use adamw_torch with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 1
 - mixed_precision_training: Native AMP
 
 ### Training results
```
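The updated hyperparameters are internally consistent: the effective (total) train batch size is the per-device batch size times the gradient-accumulation steps. A minimal sketch of that relationship, assuming single-device training (the `config` dict here is illustrative, not part of the commit):

```python
# Hyperparameters as listed in the updated README (post-commit values).
config = {
    "learning_rate": 0.0002,
    "train_batch_size": 16,
    "eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 8,
    "num_epochs": 1,
}

# Effective train batch size = per-device batch size × accumulation steps
# (assuming a single device, since the README lists no device count).
total_train_batch_size = (
    config["train_batch_size"] * config["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 128, matching the README's total_train_batch_size
```

This also explains the diff: doubling `train_batch_size` from 8 to 16 with unchanged accumulation doubles the total from 64 to 128.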