---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
library: llama.cpp
library_link: https://github.com/ggerganov/llama.cpp
language:
  - en
pipeline_tag: text-generation
tags:
  - nlp
  - code
  - gguf
---

## Model Summary

The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with an emphasis on high-quality, reasoning-dense properties. The model belongs to the Phi-3 family; the Mini version comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which denote the context length (in tokens) each can support.

After initial training, the model underwent a post-training process involving supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks testing common sense, language understanding, mathematics, coding, long context, and logical reasoning, Phi-3-Mini-128K-Instruct demonstrated robust, state-of-the-art performance among models with fewer than 13 billion parameters.

Resources and Technical Documentation:

+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)

## Quantized Model Files

Phi-3-Mini-128K-Instruct is available in several GGUF formats, catering to different computational needs:

- **ggml-model-q4_0.gguf**: 4-bit quantization, offering a compact size of 2.1 GB for efficient inference.
- **ggml-model-q8_0.gguf**: 8-bit quantization, providing robust performance with a file size of 3.8 GB.
- **ggml-model-f16.gguf**: standard 16-bit floating-point format, with a larger file size of 7.2 GB for enhanced precision.

These formats, ranging from 4-bit to 16-bit, accommodate various computational environments, from resource-constrained devices to high-end servers; a minimal usage sketch follows the list below.
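To show how one of these files can be consumed, here is a minimal sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings for llama.cpp. The file name matches the q4_0 artifact listed above, but the local path, context size, and generation parameters are illustrative assumptions rather than part of this card.

```python
# Minimal inference sketch (assumes `pip install llama-cpp-python` and that
# ggml-model-q4_0.gguf has been downloaded into the working directory).
from llama_cpp import Llama

llm = Llama(
    model_path="./ggml-model-q4_0.gguf",  # swap in q8_0 or f16 for higher fidelity
    n_ctx=4096,  # context window for this session; the model itself supports up to 128K
)

# If the GGUF file embeds a chat template (as recent conversions typically do),
# the chat API applies it automatically; otherwise the Phi-3 prompt format
# (<|user|> ... <|end|> <|assistant|>) would need to be built by hand.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one paragraph."}],
    max_tokens=128,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

The 4-bit file is the usual starting point on CPUs and small GPUs; stepping up to q8_0 or f16 trades memory footprint for output fidelity.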