
Quantization made by Richard Erkhov.

Github

Discord

Request more models

pygm-350m-experimental - AWQ
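
A minimal sketch of loading this AWQ checkpoint with transformers is below. The repo id is a placeholder, not the actual Hub path, and the autoawq package must be installed alongside transformers for AWQ weights to load.

```python
# Minimal loading sketch -- the repo id below is a placeholder, not the actual Hub path.
# Requires the autoawq package alongside transformers for AWQ checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/pygm-350m-experimental-awq"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Hello there!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```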

Original model description:

```yaml
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pygmalion-350m
  results: []
```

pygmalion-350m

This model is a fine-tuned version of PygmalionAI/pygmalion-350m, trained on a 2.4 MB dataset. It achieves the following results on the evaluation set:

  • Loss: 2.2731
  • Accuracy: 0.5187
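
For context, a causal-LM cross-entropy loss of 2.2731 corresponds to a perplexity of exp(2.2731) ≈ 9.7 on the evaluation set.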

Model description

A proof-of-concept model based on PygmalionAI/pygmalion-350m, which was in turn based on OPT-350m.

This model was fine-tuned purely for testing purposes.

Fine-tuning process

Fine-tuned on an A100-80GB GPU with Hugging Face's run_clm.py script. Training ran for 3 epochs with a batch size of 8 on the 2.4 MB dataset, split 75/25 between training and validation sets.

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
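
For reference, a minimal sketch of the settings above expressed as transformers TrainingArguments; the output path is a placeholder, and the Adam betas and epsilon listed are the library defaults.

```python
# Sketch of the hyperparameters above as TrainingArguments.
# output_dir is hypothetical; the adam_* values match the defaults noted above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pygmalion-350m-finetune",  # placeholder path
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```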

Framework versions

  • Transformers 4.27.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.10.0
  • Tokenizers 0.13.2