MIDIstral

Pixtral for MIDI

[MIDIstral logo: MIDIstral-Logo.jpg]

This model is a fine-tuned version of mistral-community/pixtral-12b on the MIDIstral dataset. It achieves the following results on the evaluation set:

  • eval_loss: 1.4113
  • eval_runtime: 29.753 s
  • eval_samples_per_second: 3.832
  • eval_steps_per_second: 0.504
  • epoch: 0.3605
  • step: 5130
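
To use the adapter, load it on top of the base checkpoint with PEFT. A minimal loading sketch, assuming the adapter is published as asigalov61/MIDIstral_pixtral and that the base model loads through Transformers' Llava classes (the card itself does not include loading code):

```python
# Minimal loading sketch; asigalov61/MIDIstral_pixtral is the adapter repo
# for this card. Requires the framework versions listed below.
from transformers import AutoProcessor, LlavaForConditionalGeneration
from peft import PeftModel

base_id = "mistral-community/pixtral-12b"
base = LlavaForConditionalGeneration.from_pretrained(
    base_id,
    torch_dtype="auto",   # pick bf16/fp16 explicitly if "auto" is unsuitable
    device_map="auto",    # requires `accelerate` to be installed
)
model = PeftModel.from_pretrained(base, "asigalov61/MIDIstral_pixtral")
processor = AutoProcessor.from_pretrained(base_id)
```

Generation then proceeds through the processor and model.generate, as with the base Pixtral model.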

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: constant
  • num_epochs: 1
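
The values above map onto Transformers' TrainingArguments roughly as in this hypothetical sketch; output_dir is a placeholder, and anything not listed above is left at its default:

```python
# Hypothetical mapping of the listed hyperparameters onto Transformers'
# TrainingArguments; this is not the actual training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="midistral-pixtral",   # placeholder path
    learning_rate=3e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",              # AdamW, betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="constant",
    num_train_epochs=1,
)
```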

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.3
  • PyTorch 2.4.1
  • Datasets 3.1.0
  • Tokenizers 0.20.4

Project Los Angeles

Tegridy Code 2024
