---
language:
- en
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
model-index:
- name: experiment2-cause-non-qLoRa
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 60.32
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 82.92
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 45.47
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.06
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 33.59
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
      name: Open LLM Leaderboard
---

# Model Details

* Model Description: This model is a test of data ordering for fine-tuning.
* Developed by: Juhwan Lee
* Model Type: Large Language Model

# Model Architecture

This model is based on Mistral-7B-v0.1, which we fine-tuned for the data-ordering task. A minimal loading sketch is shown after the Dataset section below.

Mistral-7B-v0.1 is a transformer model with the following architecture choices:

* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer

# Dataset

We randomly sampled a subset of the SlimOrca dataset, as sketched below.
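The exact subset size and random seed are not documented in this card. As a rough, non-authoritative sketch using the `datasets` library (the sample size and seed below are placeholders, not the values actually used), the subset could be drawn like this:

```python
from datasets import load_dataset

# Load SlimOrca; the dataset ships a single "train" split.
slimorca = load_dataset("Open-Orca/SlimOrca", split="train")

# Draw a random subset. The seed and sample size are placeholders;
# the actual values used for this model are not documented.
subset = slimorca.shuffle(seed=42).select(range(100_000))

print(subset)
```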
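For reference, here is a minimal sketch of loading the fine-tuned model with the `transformers` library. The repository id is assumed from the Open LLM Leaderboard links in this card, and the plain-text prompt (rather than a chat template) is an assumption, since the training prompt format is not documented:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the leaderboard links in this card.
model_id = "NLUHOPOE/experiment2-cause-non-qLoRa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumes a GPU with fp16 support
    device_map="auto",
)

# Plain-text prompt; the prompt format used during fine-tuning is not documented.
prompt = "Explain grouped-query attention in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```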
# Github

https://github.com/trailerAI

# License

Apache License 2.0

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Results for experiment2-cause-non-qLoRa. Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NLUHOPOE__experiment2-cause-non-qLoRa).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |60.44|
|AI2 Reasoning Challenge (25-Shot)|60.32|
|HellaSwag (10-Shot)              |82.92|
|MMLU (5-Shot)                    |62.30|
|TruthfulQA (0-shot)              |45.47|
|Winogrande (5-shot)              |78.06|
|GSM8k (5-shot)                   |33.59|

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Results for the related experiment2-cause-non model. Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NLUHOPOE__experiment2-cause-non).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |62.69|
|AI2 Reasoning Challenge (25-Shot)|61.09|
|HellaSwag (10-Shot)              |83.72|
|MMLU (5-Shot)                    |64.13|
|TruthfulQA (0-shot)              |47.34|
|Winogrande (5-shot)              |79.48|
|GSM8k (5-shot)                   |40.41|
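For context, the Avg. rows above are the unweighted mean of the six benchmark scores, following the Open LLM Leaderboard's aggregation; a quick check:

```python
# Scores from the first table (experiment2-cause-non-qLoRa).
scores_a = [60.32, 82.92, 62.30, 45.47, 78.06, 33.59]
# Scores from the second table (experiment2-cause-non).
scores_b = [61.09, 83.72, 64.13, 47.34, 79.48, 40.41]

# The leaderboard "Avg." column is the simple mean of the six benchmarks.
print(sum(scores_a) / 6)  # ~60.44, matching the first Avg. row
print(sum(scores_b) / 6)  # ~62.69, matching the second Avg. row
```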