See axolotl config

axolotl version: `0.6.0`

```yaml
# Original base model config
# base_model: Dans-DiscountModels/Meta-Llama-3.2-3B-ChatML
# Using smaller model instead
base_model: Emm9625/Llama-3.2-1B-chatml

# Original tokenizer config
# tokenizer_config: Dans-DiscountModels/Meta-Llama-3.2-3B-ChatML
# Using matching tokenizer for smaller model
tokenizer_config: Emm9625/Llama-3.2-1B-chatml

# Model loading configuration
load_in_8bit: false
load_in_4bit: false
strict: false

# Chat template configuration
chat_template: chatml

# Dataset configuration
datasets:
  - path: Emm9625/textwork-00
    name: smol-constraints
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
    train_on_eos: turn
    # shards: 2
    # shard_idx: 0
  - path: Emm9625/textwork-00
    name: smol-rewrite
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
    train_on_eos: turn
    # shards: 2
    # shard_idx: 0
  - path: Emm9625/textwork-00
    name: smol-summarize
    split: train
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
    train_on_eos: turn
    # shards: 2
    # shard_idx: 0

test_datasets:
  - path: Emm9625/textwork-00
    name: smol-constraints
    split: test
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
    train_on_eos: turn
    shards: 10
    shard_idx: 0
  - path: Emm9625/textwork-00
    name: smol-rewrite
    split: test
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
    train_on_eos: turn
    shards: 10
    shard_idx: 0
  - path: Emm9625/textwork-00
    name: smol-summarize
    split: test
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
    train_on_eos: turn
    shards: 10
    shard_idx: 0

dataset_prepared_path: last_run_prepared
output_dir: /tmp/meow/

hub_model_id: Emm9625/textwork-00-1B-25-01-18
hub_strategy: checkpoint
# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets
# Required to be true when used in combination with `push_dataset_to_hub`
hf_use_auth_token: true

# Model configuration
sequence_len: 4096
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

# Unsloth optimizations
unsloth_cross_entropy_loss: true
unsloth_rms_norm: true
unsloth_rope: true
# LoRA optimizations
# unsloth_lora_mlp: true
# unsloth_lora_qkv: true
# unsloth_lora_o: true

# Training configuration
gradient_accumulation_steps: 2
micro_batch_size: 8
num_epochs: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
torch_compile: auto
train_on_inputs: false
group_by_length: false
bf16: true
gradient_checkpointing: true
flash_attention: true

# Training monitoring
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_ratio: 0.1
weight_decay: 0.00
saves_per_epoch: 1
evals_per_epoch: 5
save_safetensors: true
wandb_project: textwork-01
logging_steps: 1

# Special tokens configuration
special_tokens:
  eos_token: "<|im_end|>"
  bos_token: "<|im_start|>"

fsdp:
fsdp_config:
```
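With axolotl 0.6.0 installed, a config like this is typically launched with `accelerate launch -m axolotl.cli.train config.yaml`. Note the `shards: 10` / `shard_idx: 0` fields on the test datasets: they keep only one of ten shards of each test split for evaluation. A rough stand-in using the `datasets` library (a sketch only; axolotl's internal slicing may differ):

```python
from datasets import load_dataset

# Roughly what shards: 10 / shard_idx: 0 amounts to: evaluate on
# one tenth of the test split rather than all of it.
test = load_dataset("Emm9625/textwork-00", "smol-constraints", split="test")
subset = test.shard(num_shards=10, index=0)
print(f"{len(test)} test rows -> {len(subset)} used for eval")
```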
# textwork-00-1B-25-01-18

This model is a fine-tuned version of Emm9625/Llama-3.2-1B-chatml on the smol-constraints, smol-rewrite, and smol-summarize configurations of the Emm9625/textwork-00 dataset. It achieves the following results on the evaluation set:

- Loss: 0.6892
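For intuition, a loss of 0.6892 corresponds to a per-token perplexity of roughly 1.99 (assuming the reported value is a mean token cross-entropy):

```python
import math

# Perplexity = exp(mean cross-entropy loss); the exact metric is an assumption.
print(math.exp(0.6892))  # ~1.99
```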
## Model description
More information needed
## Intended uses & limitations
More information needed
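Pending further card details, here is a minimal inference sketch (assuming a recent `transformers` release and that the checkpoint is public; the prompt contents are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Emm9625/textwork-00-1B-25-01-18"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

messages = [
    {"role": "user", "content": "Rewrite this to be more formal: hey, send the report asap."},
]
# apply_chat_template renders the ChatML format the model was trained on,
# wrapping each turn in <|im_start|> ... <|im_end|>.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```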
## Training and evaluation data
More information needed
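The dataset fields in the config (`field_messages: messages`, `message_field_role: role`, `message_field_content: content`) imply that each row carries a list of role/content turns. A hypothetical row shape (contents are illustrative only, not taken from Emm9625/textwork-00):

```python
# Hypothetical row matching the dataset config; actual contents not reproduced.
row = {
    "messages": [
        {"role": "user", "content": "Summarize the following paragraph: ..."},
        {"role": "assistant", "content": "A one-sentence summary."},
    ]
}
```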
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: 8-bit AdamW (OptimizerNames.ADAMW_8BIT) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 109
- num_epochs: 1
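The derived values above follow from the config; a quick sanity check (step totals are read off the results table below, so they are approximate):

```python
# Effective batch size = micro_batch_size * gradient_accumulation_steps.
micro_batch_size = 8
gradient_accumulation_steps = 2
print(micro_batch_size * gradient_accumulation_steps)  # 16

# warmup_ratio 0.1: step 880 lands at epoch 0.8011, so one epoch is
# about 880 / 0.8011 ~ 1098 optimizer steps; 10% of that is 109.
total_steps = round(880 / 0.8011)
print(int(total_steps * 0.1))  # 109, matching lr_scheduler_warmup_steps
```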
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3666        | 0.0009 | 1    | 1.3578          |
| 0.7961        | 0.2003 | 220  | 0.7602          |
| 0.7336        | 0.4005 | 440  | 0.7124          |
| 0.7415        | 0.6008 | 660  | 0.6946          |
| 0.7035        | 0.8011 | 880  | 0.6892          |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0