--- license: mit library_name: transformers pipeline_tag: text-generation model-index: - name: Rubra-Phi-3-mini-128k-instruct results: - task: type: text-generation dataset: type: MMLU name: MMLU metrics: - type: 5-shot value: 66.66 verified: false - task: type: text-generation dataset: type: GPQA name: GPQA metrics: - type: 0-shot value: 29.24 verified: false - task: type: text-generation dataset: type: GSM-8K name: GSM-8K metrics: - type: 8-shot, CoT value: 74.09 verified: false - task: type: text-generation dataset: type: MATH name: MATH metrics: - type: 4-shot, CoT value: 26.84 verified: false - task: type: text-generation dataset: type: MT-bench name: MT-bench metrics: - type: GPT-4 as Judge value: 7.45 verified: false tags: - function-calling - tool-calling - agentic - rubra - conversational language: - en --- # Rubra Phi-3 Mini 128k Instruct Original model: [rubra-ai/Phi-3-mini-128k-instruct](https://huggingface.co/rubra-ai/Phi-3-mini-128k-instruct) ## Model description The model is the result of further post-training [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct). This model is designed for high performance in various instruction-following tasks and complex interactions, including multi-turn function calling and detailed conversations. ## Training Data The model underwent additional training on a proprietary dataset encompassing diverse instruction-following, chat, and function calling data. This post-training process enhances the model's ability to integrate tools and manage complex interaction scenarios effectively. ## How to use Refer to https://docs.rubra.ai/inference/llamacpp for usage. Feel free to ask/open issues up in our Github repo: https://github.com/rubra-ai/rubra