# OpenCarrot-Mix-7B

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the linear merge method.
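A linear merge is a weighted average of the corresponding parameters of the input models, with the per-model weights normalized to sum to 1. A minimal sketch (plain Python floats stand in for real tensors; mergekit itself operates on full model checkpoints):

```python
def linear_merge(state_dicts, weights):
    """Weighted average of corresponding parameters across models.

    Illustrative sketch of a linear merge: each merged parameter is the
    weight-normalized average of that parameter across all input models.
    """
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize weights to sum to 1
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(n * sd[name][i] for n, sd in zip(norm, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Two toy "models" with a single two-element parameter, using the same
# 1.0 / 0.5 weights as the configuration in this card
a = {"w": [3.0, 0.0]}
b = {"w": [0.0, 3.0]}
merged = linear_merge([a, b], weights=[1.0, 0.5])
# merged["w"] is approximately [2.0, 1.0]
```

With weights 1.0 and 0.5, the first model effectively contributes two thirds of each parameter and the second one third.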

### Models Merged

The following models were included in the merge:

- amazingvince/Not-WizardLM-2-7B
- CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2

## Score

| Model | Score (AVG_llm_kr_eval) |
|---|---|
| openai/gpt-4 | 0.6158 |
| gemini-pro | 0.5150 |
| **OpenCarrot-Mix-7B (this model)** | 0.4425 |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 0.4304 |
| openai/gpt-3.5-turbo | 0.4217 |
ํ‰๊ฐ€ ์ง€ํ‘œ ์ ์ˆ˜
AVG_llm_kr_eval 0.4425
EL 0.0522
FA 0.0865
NLI 0.6700
QA 0.5100
RC 0.8937
klue_ner_set_f1 0.0944
klue_re_exact_match 0.0100
kmmlu_preview_exact_match 0.4000
kobest_copa_exact_match 0.8200
kobest_hs_exact_match 0.5500
kobest_sn_exact_match 0.9800
kobest_wic_exact_match 0.6200
korea_cg_bleu 0.0865
kornli_exact_match 0.6400
korsts_pearson 0.8547
korsts_spearman 0.8464

## LogicKor

| Category | Single-turn avg. | Multi-turn avg. |
|---|---|---|
| Coding | 7.71 | 7.71 |
| Math | 5.57 | 3.86 |
| Understanding | 6.86 | 8.14 |
| Reasoning | 8.14 | 6.43 |
| Writing | 8.71 | 6.86 |
| Grammar | 5.29 | 2.29 |
| **Overall** | 7.05 | 5.88 |

## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: amazingvince/Not-WizardLM-2-7B
    parameters:
      weight: 1.0
  - model: CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
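To reproduce the merge, the YAML above can be passed to mergekit's `mergekit-yaml` command. A sketch, assuming mergekit is installed and the configuration is saved as `config.yml` (a hypothetical filename):

```shell
# Install mergekit, then run the merge described by the YAML config.
# The merged model is written to the ./merged output directory.
pip install mergekit
mergekit-yaml config.yml ./merged
```

Note that mergekit normalizes the linear weights, so the 1.0 / 0.5 values above contribute roughly two thirds and one third of each parameter, respectively.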