---
base_model:
- sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
- djuna/Q2.5-Veltha-14B
library_name: transformers
tags:
- mergekit
- merge
---
# Qwen-2.5-14B-MINUS
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
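SLERP (spherical linear interpolation) blends two weight tensors along the arc between them on a hypersphere rather than along a straight line, which tends to preserve the geometry of the weights better than plain averaging. A minimal NumPy sketch of the per-tensor operation (an illustration of the idea, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between weight tensors a and b.

    t=0 returns a, t=1 returns b; intermediate values of t follow the
    great-circle arc between the two (normalized) weight directions.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_dir = a_flat / (np.linalg.norm(a_flat) + eps)
    b_dir = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_dir, b_dir), -1.0, 1.0)
    theta = np.arccos(dot)  # angle between the two weight directions
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    sin_theta = np.sin(theta)
    coeff_a = np.sin((1.0 - t) * theta) / sin_theta
    coeff_b = np.sin(t * theta) / sin_theta
    return (coeff_a * a_flat + coeff_b * b_flat).reshape(a.shape)
```

In mergekit, the `t` parameter in the configuration below controls this interpolation per layer group, with `t: 0` keeping the base model's weights and `t: 1` taking the other model's.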
### Models Merged
The following models were included in the merge:
* [sometimesanotion/Qwen2.5-14B-Vimarckoso-v3](https://huggingface.co/sometimesanotion/Qwen2.5-14B-Vimarckoso-v3)
* [djuna/Q2.5-Veltha-14B](https://huggingface.co/djuna/Q2.5-Veltha-14B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: djuna/Q2.5-Veltha-14B
  - model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
merge_method: slerp
base_model: djuna/Q2.5-Veltha-14B
dtype: bfloat16
parameters:
  t: [0, 0.2, 0.8, 0.1, 0] # Skewed towards the second model

regularization:
  - method: gradient_penalty
    scale: 0.07 # Increased for stronger regularization
  - method: weight_clipping
    clip_range: [-0.2, 0.2] # Widened range
  - method: random_noise
    scale: 0.005 # Reduced to avoid excessive noise
  - method: attention_dropout
    scale: 0.03 # Increased dropout

postprocessing:
  - operation: entropy_regularization
    scale: 0.07 # Increased for stronger effect
  - operation: non_linear_scaling
    parameters:
      function: gelu # Changed to GELU for potentially better performance
  - operation: sharpening
    intensity: 0.7 # Increased sharpening
  - operation: gaussian_smoothing
    sigma: 0.2 # Reduced smoothing for more detail
  - operation: normalize
  - operation: dynamic_scaling
    scale_range: [0.97, 1.03] # Slightly wider range
  - operation: smoothing
    parameters:
      adaptive: true
      range: [0.97, 1.03] # Slightly wider range
      kernel_size: 5 # Increased kernel size for more aggressive smoothing
```
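To reproduce a merge like this locally, a typical mergekit invocation looks like the following (assuming mergekit is installed and the configuration above is saved as `merge-config.yaml`; note that the `regularization` and `postprocessing` sections are not documented options of stock mergekit's SLERP method and may be ignored or rejected depending on the version):

```shell
pip install mergekit

# Run the merge; output weights land in ./Qwen-2.5-14B-MINUS.
# --cuda uses a GPU if available; --copy-tokenizer carries over the
# base model's tokenizer into the output directory.
mergekit-yaml merge-config.yaml ./Qwen-2.5-14B-MINUS --cuda --copy-tokenizer
```

The resulting directory can then be loaded with `transformers` (e.g. `AutoModelForCausalLM.from_pretrained`) like any other checkpoint.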