---
license: llama3
library_name: transformers
tags:
- mergekit
- merge
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: Llama-3-15B-Instruct-zeroed
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 61.69
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=elinas/Llama-3-15B-Instruct-zeroed
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 78.64
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=elinas/Llama-3-15B-Instruct-zeroed
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 67.97
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=elinas/Llama-3-15B-Instruct-zeroed
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.46
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=elinas/Llama-3-15B-Instruct-zeroed
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 74.98
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=elinas/Llama-3-15B-Instruct-zeroed
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 70.74
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=elinas/Llama-3-15B-Instruct-zeroed
      name: Open LLM Leaderboard
---

# Llama-3-15B-Instruct-zeroed

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method while zeroing `o_proj` and `down_proj`, which led to a decrease in perplexity (good) compared to similar 15B merges; see the illustrative sketch below. This was a recommendation from [Charles Goddard](https://huggingface.co/chargoddard) - thank you for sharing the merge method, and thanks to Toasty Pigeon for bringing it to my attention!

## Finetuned Version

A finetuned version of this model is available at [elinas/Llama-3-15B-Instruct-zeroed-ft](https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed-ft), which appears to improve performance.
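The intuition behind the zeroing: in a Llama-style decoder layer, the attention and MLP outputs pass through `o_proj` and `down_proj` respectively before being added back to the residual stream, so zeroing those two matrices makes the duplicated layers act as identity maps at merge time. The sketch below is purely illustrative (a toy block in plain PyTorch, not the actual Llama implementation; all names in it are made up for the demo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyBlock(nn.Module):
    # Simplified Llama-style block: both residual branches are projected
    # through o_proj / down_proj before being added to the stream.
    def __init__(self, d: int, d_ff: int):
        super().__init__()
        self.attn = nn.Linear(d, d, bias=False)        # stand-in for self-attention
        self.o_proj = nn.Linear(d, d, bias=False)
        self.up_proj = nn.Linear(d, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d, bias=False)

    def forward(self, x):
        x = x + self.o_proj(self.attn(x))                 # attention branch
        x = x + self.down_proj(F.silu(self.up_proj(x)))   # MLP branch
        return x

block = ToyBlock(64, 256)
with torch.no_grad():              # the "scale: 0.0" filters in the config below
    block.o_proj.weight.zero_()
    block.down_proj.weight.zero_()

x = torch.randn(2, 8, 64)
assert torch.allclose(block(x), x)  # zeroed block passes the residual through unchanged
print("zeroed block is an identity map")
```

Because the duplicated layers contribute nothing initially, the merged model starts out behaving much like the base model, which is consistent with the lower perplexity observed relative to similar 15B merges.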
### Models Merged

The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [8, 24]
    model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [24, 32]
    model: meta-llama/Meta-Llama-3-8B-Instruct
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_elinas__Llama-3-15B-Instruct-zeroed).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 67.75 |
| AI2 Reasoning Challenge (25-Shot) | 61.69 |
| HellaSwag (10-Shot)               | 78.64 |
| MMLU (5-Shot)                     | 67.97 |
| TruthfulQA (0-shot)               | 52.46 |
| Winogrande (5-shot)               | 74.98 |
| GSM8k (5-shot)                    | 70.74 |
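
## Usage

A minimal inference sketch with `transformers` (assumptions, not part of the original card: bfloat16 weights as in the merge config, enough GPU memory for `device_map="auto"`, and the stock Llama 3 Instruct chat template; adjust sampling settings to taste):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/Llama-3-15B-Instruct-zeroed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Briefly explain what a model merge is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```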