---
license: apache-2.0
base_model: Qwen/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: Einstein-v7-Qwen2-7B
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: Qwen/Qwen2-7B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: chatml
datasets:
  - path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml
  - path: data/buzz_unstacked_chosen_math_removed_filtered.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: data/capybara_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/everythinglm-data-v3_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml
  - path: data/gpt4_data_lmys_1m_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/gpteacher-instruct-special-alpaca.json
    ds_type: json
    type: gpteacher
    conversation: chatml
  - path: data/merged_all.json
    ds_type: json
    type: alpaca
    conversation: chatml
  - path: data/no_robots_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml
  - path: data/oasst_top1_from_fusechatmixture_sharegpt.json
    ds_type: json
    type: sharegpt
    strict: false
    conversation: chatml
  - path: data/pippa_bagel_repo_3k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/rpguild_quarter_alignment_lab_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/sharegpt_gpt4_english.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/slimorca_dedup_filtered_95k_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/soda_diaolog_longest_tenth_buzz_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/synthia-v1.3_sharegpt_12500.json
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: data/system_conversations_dolphin_sharegpt.json
    ds_type: json
    type: sharegpt
    conversation: chatml

dataset_prepared_path: last_run_prepared
val_set_size: 0.002

output_dir: ./Einstein-v7-Qwen2-7B-model

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

wandb_project: Einstein
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: Weyaxi/Einstein-v7-Qwen2-7B

gradient_accumulation_steps: 4
micro_batch_size: 6
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001 # look

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
gradient_checkpointing_kwargs:
  use_reentrant: true # look
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:

special_tokens:
  eos_token: "<|im_end|>"
  pad_token: "<|end_of_text|>"
tokens:
  - "<|im_start|>"
  - "<|im_end|>"
```

</details>
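Most of the dataset entries above use Axolotl's `sharegpt` loader, which expects ShareGPT-style conversation records. As a minimal sketch of that shape (the file name and conversation contents here are illustrative, not taken from the actual training data):

```python
import json

# One ShareGPT-style record: a "conversations" list of turns, each with a
# "from" role (system / human / gpt) and a "value" string. Axolotl renders
# these turns with the ChatML template configured above.
record = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant."},
        {"from": "human", "value": "What is the speed of light?"},
        {"from": "gpt", "value": "About 299,792,458 metres per second in a vacuum."},
    ]
}

# Illustrative path only; the real training files are listed in the config.
with open("data/example_sharegpt.json", "w") as f:
    json.dump([record], f, indent=2)
```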

# Einstein-v7-Qwen2-7B

This model is a fine-tuned version of [Qwen/Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) on the dataset mixture listed in the Axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.6983

## Model description

Einstein-v7-Qwen2-7B is a full fine-tune of Qwen2-7B, trained with Axolotl on a diverse mix of instruction-following, conversation, and roleplay datasets using the ChatML prompt format.

## Intended uses & limitations

The model expects ChatML-formatted prompts (see the usage sketch at the end of this card). Beyond that, more information is needed.

## Training and evaluation data

The training data is the mixture of JSON datasets listed in the Axolotl config above; 0.2% of it (`val_set_size: 0.002`) was held out as the evaluation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 192
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2

The total train batch size follows from the per-device settings: 6 (micro batch) × 4 (gradient accumulation steps) × 8 (GPUs) = 192.

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9189        | 0.0   | 1    | 0.8840          |
| 0.7368        | 0.5   | 125  | 0.7193          |
| 0.7406        | 1.0   | 250  | 0.7037          |
| 0.6593        | 1.48  | 375  | 0.6996          |
| 0.6754        | 1.97  | 500  | 0.6983          |

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.4.0.dev20240508+rocm6.1
- Datasets 2.15.0
- Tokenizers 0.15.0
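## How to use

A minimal inference sketch, not part of the original card: it assumes a recent `transformers` release with chat-template support and a GPU with enough memory for a 7B model in bf16. The example prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Einstein-v7-Qwen2-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bf16 training setup
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the photoelectric effect in two sentences."},
]

# The model was trained with the ChatML template, so apply_chat_template
# wraps each turn in <|im_start|>/<|im_end|> markers for us.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```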