This was just an experiment, and one that went bad: by running DPO on a German translation of a math preference dataset, I actually managed to decrease the model's ability to do math.
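For context, this is roughly the shape of such a run using trl's `DPOTrainer`. A minimal sketch, assuming a recent trl version; the base model, dataset name, and hyperparameters below are placeholders for illustration, not the actual training setup:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Base model is an assumption: the 7.24B BF16 size matches a Mistral-7B derivative.
model_name = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Placeholder for the German-translated math preference data; DPOTrainer
# expects "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("your-org/math-dpo-german", split="train")

config = DPOConfig(
    output_dir="breznchatml-dpo",
    beta=0.1,                        # strength of the KL penalty against the reference model
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,                     # with no explicit ref_model, a frozen copy is created internally
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```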

Benchmark results:

```json
{
    "first_turn": 6.48125,
    "second_turn": 6.19375,
    "categories": {
        "writing": 8.425,
        "roleplay": 7.4,
        "reasoning": 4.6,
        "math": 2.65,
        "coding": 4.6,
        "extraction": 7,
        "stem": 8.0,
        "humanities": 8.025
    },
    "average": 6.3375
}
```
Model size: 7.24B parameters (BF16, safetensors).
