# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SLERP merge method, with djuna/Q2.5-Veltha-14B as the base model.
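
SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc of the unit hypersphere rather than along a straight line, with `t` controlling how far the result moves from the base model toward the second model; the list form of `t` in the configuration below defines a gradient of values across layer depth. The snippet below is a minimal illustrative sketch of the interpolation itself, not mergekit's internal implementation; the `slerp` helper name and the colinearity fallback threshold are assumptions chosen for clarity.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation of two weight tensors:
    slerp(t) = sin((1 - t) * omega) / sin(omega) * v0 + sin(t * omega) / sin(omega) * v1,
    where omega is the angle between the flattened tensors."""
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    dot = torch.clamp(
        torch.dot(v0f / (v0f.norm() + eps), v1f / (v1f.norm() + eps)), -1.0, 1.0
    )
    if dot.abs() > 0.9995:  # nearly colinear: plain linear interpolation is numerically safer
        return ((1.0 - t) * v0f + t * v1f).reshape(v0.shape).to(v0.dtype)
    omega = torch.acos(dot)
    s0 = torch.sin((1.0 - t) * omega) / torch.sin(omega)
    s1 = torch.sin(t * omega) / torch.sin(omega)
    return (s0 * v0f + s1 * v1f).reshape(v0.shape).to(v0.dtype)

# Example: t = 0.8 pulls the blended tensor strongly toward the second model.
# blended = slerp(0.8, base_layer_weight, other_layer_weight)
```
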
### Models Merged

The following models were included in the merge:

* djuna/Q2.5-Veltha-14B
* sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
### Configuration
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: djuna/Q2.5-Veltha-14B
  - model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
merge_method: slerp
base_model: djuna/Q2.5-Veltha-14B
dtype: bfloat16
parameters:
  t: [0, 0.2, 0.8, 0.1, 0]  # Skewed towards the second model
regularization:
  - method: gradient_penalty
    scale: 0.07  # Increased for stronger regularization
  - method: weight_clipping
    clip_range: [-0.2, 0.2]  # Widened range
  - method: random_noise
    scale: 0.005  # Reduced to avoid excessive noise
  - method: attention_dropout
    scale: 0.03  # Increased dropout
postprocessing:
  - operation: entropy_regularization
    scale: 0.07  # Increased for stronger effect
  - operation: non_linear_scaling
    parameters:
      function: gelu  # Changed to GELU for potentially better performance
  - operation: sharpening
    intensity: 0.7  # Increased sharpening
  - operation: gaussian_smoothing
    sigma: 0.2  # Reduced smoothing for more detail
  - operation: normalize
  - operation: dynamic_scaling
    scale_range: [0.97, 1.03]  # Slightly wider range
  - operation: smoothing
    parameters:
      adaptive: true
      range: [0.97, 1.03]  # Slightly wider range
      kernel_size: 5  # Increased kernel size for more aggressive smoothing
```
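
To reproduce the merge, the configuration above can be saved to a file and passed to mergekit. The sketch below assumes mergekit's documented Python API (`MergeConfiguration`, `MergeOptions`, `run_merge`); the `config.yaml` file name and the output path are placeholders, exact option names may vary between mergekit versions, and the `mergekit-yaml` CLI is an equivalent alternative.

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved as config.yaml, a placeholder path).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; the output directory and options are illustrative.
run_merge(
    merge_config,
    out_path="./Qwen-2.5-14B-MINUS",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```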