---
license: apache-2.0
inference: false
base_model: microsoft/Phi-3-vision-128k-instruct
base_model_relation: quantized
tags: [green, llmware-vision, p3, onnx, emerald]
---

# phi-3-vision-onnx

**phi-3-vision-onnx** is an ONNX int4 quantized version of [microsoft/Phi-3-vision-128k-instruct](https://www.huggingface.co/microsoft/Phi-3-vision-128k-instruct), providing an inference implementation optimized for AI PCs.

This is a vision-to-text model from the Phi-3 release series: a high-quality, innovative small model that accepts multimodal inputs (image, text).

### Model Description

- **Developed by:** microsoft
- **Quantized by:** microsoft
- **Model type:** phi-3-vision
- **Parameters:** 3.8 billion
- **Model Parent:** microsoft/Phi-3-vision-128k-instruct
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Multimodal LLM
- **Quantization:** int4

## Model Card Contact

[llmware on github](https://www.github.com/llmware-ai/llmware)

[llmware on hf](https://www.huggingface.co/llmware)

[llmware website](https://www.llmware.ai)
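## Usage

For reference, below is a minimal sketch of downloading and running the model with `huggingface_hub` and `onnxruntime-genai`. The repo id `llmware/phi-3-vision-onnx` is assumed from this card's title, and the local path, image file, and prompt are illustrative placeholders; the exact generation loop may differ across `onnxruntime-genai` versions.

```python
# A minimal sketch, assuming onnxruntime-genai ~0.3/0.4 and huggingface_hub
# are installed. Repo id, paths, and prompt below are placeholders.
import onnxruntime_genai as og
from huggingface_hub import snapshot_download

# download the ONNX model files to a local folder
# (repo id assumed from the card title)
model_dir = snapshot_download("llmware/phi-3-vision-onnx",
                              local_dir="phi-3-vision-onnx")

# load the quantized ONNX model and its multimodal processor
model = og.Model(model_dir)
processor = model.create_multimodal_processor()
tokenizer_stream = processor.create_stream()

# Phi-3-vision chat template with a single image reference
prompt = "<|user|>\n<|image_1|>\nDescribe this image.<|end|>\n<|assistant|>\n"
image = og.Images.open("sample_image.png")
inputs = processor(prompt, images=image)

params = og.GeneratorParams(model)
params.set_inputs(inputs)
params.set_search_options(max_length=2048)

# stream tokens to stdout as they are generated
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    token = generator.get_next_tokens()[0]
    print(tokenizer_stream.decode(token), end="", flush=True)
print()
```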