
Xinference Infer

Xinference is a unified inference platform that provides a single interface to multiple inference engines. It supports LLMs, text generation, image generation, and more, and it is not much heavier than Swift.

Xinference install

Xinference can be installed with a single pip command:

pip install "xinference[all]"

Quick start

On the first launch, Xinference downloads the model automatically. The steps are:

  1. Start Xinference in the terminal:
xinference
  2. Open the web UI.
  3. Search for "MiniCPM-Llama3-V-2_5" in the search box.


  4. Find and click the MiniCPM-Llama3-V-2_5 button.
  5. Launch the model with the following configuration:
Model engine : Transformers
Model format : pytorch
Model size   : 8
Quantization : none
N-GPU        : auto
Replica      : 1
  6. The first time you click the launch button, Xinference downloads the model from Hugging Face. Once the download finishes, click the web UI button.


  7. Upload an image and chat with MiniCPM-Llama3-V-2_5.
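Besides the web UI, a launched model can also be queried programmatically. Xinference exposes an OpenAI-compatible REST API; the sketch below only builds the request payload (the model UID, endpoint URL, and image MIME type are assumptions, not values taken from this guide):

```python
import base64
import json

def build_chat_payload(model_uid, question, image_path=None):
    """Build an OpenAI-style chat payload for an Xinference
    /v1/chat/completions endpoint (model_uid is hypothetical)."""
    content = [{"type": "text", "text": question}]
    if image_path is not None:
        # Images are sent inline as a base64 data URL.
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        })
    return {
        "model": model_uid,
        "messages": [{"role": "user", "content": content}],
    }

# Example: a text-only request (no image file needed).
payload = build_chat_payload("MiniCPM-Llama3-V-2_5", "Describe this image.")
print(json.dumps(payload, indent=2))
```

The resulting dict can then be POSTed to the running server with any HTTP client, e.g. `requests.post("http://127.0.0.1:9997/v1/chat/completions", json=payload)` (host and port depend on your launch settings).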

Local MiniCPM-Llama3-V-2_5 Launch

If you have already downloaded the MiniCPM-Llama3-V-2_5 model locally, you can run inference with Xinference as follows:

  1. Start Xinference:
xinference
  2. Open the web UI.
  3. Register a new model. The settings highlighted in red are fixed and cannot be changed; the others are customizable. Finish by clicking the 'Register Model' button.


  4. After registering the model, go to 'Custom Models' and locate the model you just registered.
  5. Launch the model with the following configuration:
Model engine : Transformers
Model format : pytorch
Model size   : 8
Quantization : none
N-GPU        : auto
Replica      : 1
  6. Because the model is already on disk, clicking the launch button loads it locally instead of downloading it. Once it is running, click the chat button.
  7. Upload an image and chat with MiniCPM-Llama3-V-2_5.
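Before registering a local checkpoint, it can help to verify that the model directory is complete. A minimal sketch (the expected file names are typical Hugging Face Transformers artifacts, not a requirement stated by this guide):

```python
from pathlib import Path

def check_model_dir(model_dir):
    """Return a list of expected files missing from model_dir.
    The file list is a typical Transformers checkpoint layout,
    not an Xinference requirement."""
    model_dir = Path(model_dir)
    expected = ["config.json", "tokenizer_config.json"]
    missing = [name for name in expected if not (model_dir / name).exists()]
    # Weights may be sharded .safetensors or .bin files.
    has_weights = any(model_dir.glob("*.safetensors")) or any(model_dir.glob("*.bin"))
    if not has_weights:
        missing.append("model weights (*.safetensors or *.bin)")
    return missing
```

If the returned list is non-empty, re-download the missing files before pointing the registration form at the directory.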

FAQ

  1. Why does the web UI fail to open in step 6?

Your firewall or macOS security settings may be preventing the page from opening.
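One quick way to narrow this down is to check whether anything is listening on the Xinference port at all. A minimal sketch (127.0.0.1 and port 9997, Xinference's usual default, are assumptions; adjust to your setup):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if is_port_open("127.0.0.1", 9997):
    print("Server is reachable; check browser and firewall settings.")
else:
    print("Nothing is listening; check that Xinference actually started.")
```

If the port is open but the browser still cannot load the page, the problem is on the browser/firewall side rather than the server side.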