sophia / finetuning_args.json
{
  "finetuning_type": "lora",
  "lora_alpha": 32.0,
  "lora_dropout": 0.1,
  "lora_rank": 8,
  "lora_target": [
    "query_key_value"
  ],
  "name_module_trainable": "mlp",
  "num_hidden_layers": 32,
  "num_layer_trainable": 3
}
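
The field names above match the finetuning_args.json that LLaMA-Factory (formerly LLaMA Efficient Tuning) saves alongside a checkpoint; name_module_trainable and num_layer_trainable belong to its freeze-tuning path and are typically unused when finetuning_type is "lora". Below is a minimal sketch of how the LoRA settings map onto a peft.LoraConfig. The base-model checkpoint ("tiiuae/falcon-7b") and the task type are assumptions, not part of the config; "query_key_value" is the fused attention projection in Falcon models, which is why it appears as the sole LoRA target.

# Minimal sketch: recreating the LoRA setup above with peft.
# The checkpoint name and task type are assumptions; the numeric
# values are taken directly from finetuning_args.json.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",   # assumed base model
    trust_remote_code=True,
)

lora_config = LoraConfig(
    r=8,                                 # lora_rank
    lora_alpha=32,                       # lora_alpha
    lora_dropout=0.1,                    # lora_dropout
    target_modules=["query_key_value"],  # lora_target
    task_type="CAUSAL_LM",               # assumed task type
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the rank-8 adapters should be trainable

With rank 8 and alpha 32 the effective LoRA scaling factor is alpha / r = 4, a common ratio that keeps adapter updates strong relative to the frozen base weights.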