---
library_name: transformers
license: llama3.1
base_model: anastas5/llama3.1-8B-Instruct-rus-test-v2
tags:
- generated_from_trainer
model-index:
- name: home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
base_model: anastas5/llama3.1-8B-Instruct-rus-test-v2

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: anastas5/dataset-rus-test-six
    type: sharegpt
    conversation: llama-3
dataset_prepared_path: /home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/prepared_tagengo_rus
val_set_size: 0.05
output_dir: /home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

use_wandb: false

gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 5
eval_table_size:
saves_per_epoch: 1
debug:
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
```

</details><br>
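The `datasets` entry above feeds ShareGPT-style conversations through the Llama-3 chat template (`conversation: llama-3`), so the fine-tuned model expects the same prompt format at inference. Below is a minimal sketch of rendering one turn with that template; it assumes the base model's tokenizer ships the standard Llama-3.1 chat template.

```python
# Sketch only: render a single ShareGPT-style user turn with the Llama-3 chat
# template, mirroring the `conversation: llama-3` setting in the config above.
# Assumption: the base model's tokenizer carries the standard Llama-3.1 chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anastas5/llama3.1-8B-Instruct-rus-test-v2")

messages = [
    {"role": "user", "content": "Hello! How are you?"},
]

# tokenize=False returns the raw prompt string so the special tokens are visible;
# add_generation_prompt=True appends the assistant header the model completes from.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```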

# home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru

This model is a fine-tuned version of [anastas5/llama3.1-8B-Instruct-rus-test-v2](https://huggingface.co/anastas5/llama3.1-8B-Instruct-rus-test-v2) on the anastas5/dataset-rus-test-six dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7729

## Model description

This checkpoint was produced with Axolotl 0.4.1 by fine-tuning the base model for 4 epochs on ShareGPT-formatted Russian conversations, using the Llama-3 conversation template, sample packing at a sequence length of 8192, and bf16 precision.

## Intended uses & limitations

More information needed

## Training and evaluation data

Training used the anastas5/dataset-rus-test-six dataset; 5% of it (`val_set_size: 0.05`) was held out as the evaluation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: paged AdamW (8-bit) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.957         | 0.0769 | 1    | 0.8838          |
| 0.9398        | 0.2308 | 3    | 0.8762          |
| 1.0969        | 0.4615 | 6    | 0.8373          |
| 0.9608        | 0.6923 | 9    | 0.8226          |
| 0.8364        | 0.9231 | 12   | 0.8146          |
| 0.7566        | 1.1154 | 15   | 0.7914          |
| 0.7927        | 1.3462 | 18   | 0.7818          |
| 0.74          | 1.5769 | 21   | 0.7784          |
| 0.7247        | 1.8077 | 24   | 0.7783          |
| 0.7261        | 2.0385 | 27   | 0.7748          |
| 0.7255        | 2.2308 | 30   | 0.7727          |
| 0.6439        | 2.4615 | 33   | 0.7726          |
| 0.5354        | 2.6923 | 36   | 0.7725          |
| 0.638         | 2.9231 | 39   | 0.7734          |
| 0.6246        | 3.0769 | 42   | 0.7726          |
| 0.5374        | 3.3077 | 45   | 0.7726          |
| 0.5539        | 3.5385 | 48   | 0.7729          |
| 0.6056        | 3.7692 | 51   | 0.7729          |

### Framework versions

- Transformers 4.45.1
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
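For inference, here is a minimal sketch with `transformers`. It assumes the weights are loaded from the `output_dir` of the config above (swap in the published Hugging Face repo id if this checkpoint is hosted on the Hub), that `accelerate` is installed for `device_map="auto"`, and that a bf16-capable GPU is available.

```python
# Minimal inference sketch. Assumptions: weights live at the local output_dir
# from the config (replace with the published repo id if hosted on the Hub),
# `accelerate` is installed, and a bf16-capable GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the bf16 training setup
    device_map="auto",
)

# Example user turn ("Tell me briefly about Lake Baikal.")
messages = [{"role": "user", "content": "Расскажи коротко о Байкале."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```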